Looking for an easier way to use floats in C#

For a number of reasons, I have to use floats in my code instead of doubles. To use a literal in my code, I have to write something like:
float f = 0.75F;
or the compiler will barf since it treats just "0.75" as a double. Is there anything I can put in my code or set in Visual Studio that will make it treat a literal like "0.75" as a float without having to append an "F" every time?

No - fortunately, IMO. Literals are treated the same way everywhere.
This is a good thing - imagine some maintenance developer comes and looks at your code in a year's time. He sees "0.75" and thinks "I know C# - that's a double! Hang on, how is it being assigned to a float variable?" Ick.
Is it really so painful to add the "F" everywhere? Do you really have that many constants? Could you extract them as constant values, so that all your "F-suffixed" literals are in the same place?
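For illustration, a minimal sketch of what extracting the literals into one place could look like (the class and constant names are made up for this example):

internal static class Factors
{
    public const float Quarter = 0.25F;
    public const float ThreeQuarters = 0.75F;
}

// Usage: the F suffix now lives in exactly one place.
float f = Factors.ThreeQuarters;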

FYI -- you can find all of the compiler options for C# at http://msdn.microsoft.com/en-us/library/6ds95cz0.aspx. If you check there, you'll see that there isn't any option that allows this -- and rightly so, for the reasons that @Jon Skeet noted.

The language interprets floating point precision literals as doubles everywhere. This is not a configurable feature of the compiler - and with good reason.
Configuring how the language interprets your code would lead to problems with both compatibility and the ability of maintenance developers to understand what the code means.
While not advisable generally, you can reduce the pain a little in C# 3 by using:
var f = 0.75F;
Just be careful, because forgetting the 'F' suffix with this syntax WILL cause the compiler to create a double, not a float.
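To make the pitfall concrete (a small illustrative snippet):

var a = 0.75F; // a is inferred as float
var b = 0.75;  // b is inferred as double: dropping the suffix silently changes the type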

Float comes with an F :-)

I would advise you to always use
var meaning = 1f;
because the "var" keyword saves a lot of human interpretation and maintenance time.

The proper behavior would not be for a compiler to interpret non-suffixed literals as single-precision floats, but rather to recognize that conversions from double to float should be regarded as widening conversions since, for every double value, there is either precisely one unambiguously-correct float representation, or (in a few rare edge cases) there will be precisely two equally-good values, neither of which will be more than a part per quadrillion from being the unambiguously-correct value. Semantically, conversions from float to double should be regarded as narrowing conversions (since they require that the compiler "guess" at information it doesn't have), but the practical difficulties that would cause might justify making conversions in that direction 'widening'.
Perhaps one should petition Microsoft to add a widening conversion from double to float? There's no good reason why code which calculates graphics coordinates as double should be cluttered with typecasts when calling drawing functions.
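To make the "cluttered with typecasts" point concrete, a hedged sketch using System.Drawing (the coordinate values are made up; only the casts matter):

using System.Drawing;

double x1 = 0.1, y1 = 0.2, x2 = 10.3, y2 = 20.4; // coordinates computed as double
using (var bmp = new Bitmap(100, 100))
using (var g = Graphics.FromImage(bmp))
using (var pen = new Pen(Color.Black))
{
    // Graphics.DrawLine takes float parameters, so every call needs explicit casts today.
    g.DrawLine(pen, (float)x1, (float)y1, (float)x2, (float)y2);
}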

Related

Is it okay to hard-code complex math logic inside my code?

Is there a generally accepted best approach to coding complex math? For example:
double someNumber = .123 + .456 * Math.Pow(Math.E, .789 * Math.Pow((homeIndex + .22), .012));
Is this a point where hard-coding the numbers is okay? Or should each number have a constant associated with it? Or is there even another way, like storing the calculations in config and invoking them somehow?
There will be a lot of code like this, and I'm trying to keep it maintainable.
Note: The example shown above is just one line. There would be tens or hundreds of these lines of code. And not only could the numbers change, but the formula could as well.
Generally, there are two kinds of constants - ones with meaning to the implementation, and ones with meaning to the business logic.
It is OK to hard-code the constants of the first kind: they are private to understanding your algorithm. For example, if you are using a ternary search and need to divide the interval in three parts, dividing by a hard-coded 3 is the right approach.
Constants with the meaning outside the code of your program, on the other hand, should not be hard-coded: giving them explicit names gives someone who maintains your code after you leave the company non-zero chances of making correct modifications without having to rewrite things from scratch or e-mailing you for help.
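A minimal sketch of the distinction (the VAT rate value is made up for this example):

// Implementation-detail constant: the 3 comes from the algorithm itself (ternary search),
// so hard-coding it is fine.
static double ThirdOfInterval(double lower, double upper) => (upper - lower) / 3;

// Business-logic constant: the value has meaning outside the code, so give it a name.
const decimal StandardVatRate = 0.21m;
static decimal VatFor(decimal netPrice) => netPrice * StandardVatRate;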
"Is it okay"? Sure. As far as I know, there's no paramilitary police force rounding up those who sin against the one true faith of programming. (Yet.).
Is it wise?
Well, there are all sorts of ways of deciding that - performance, scalability, extensibility, maintainability etc.
On the maintainability scale, this is pure evil. It makes extensibility very hard; performance and scalability are probably not a huge concern.
If you left behind a single method with loads of lines similar to the above, your successor would have no chance maintaining the code. He'd be right to recommend a rewrite.
If you broke it down like this:
public float calculateTax(Person person)
{
    float taxFreeAmount = calcTaxFreeAmount(person);
    float taxableAmount = calcTaxableAmount(person, taxFreeAmount);
    float taxAmount = calcTaxAmount(person, taxableAmount);
    return taxAmount;
}
and each of the inner methods is a few lines long, but you left some hardcoded values in there - well, not brilliant, but not terrible.
However, if some of those hardcoded values are likely to change over time (like the tax rate), leaving them as hardcoded values is not okay. It's awful.
The best advice I can give is:
Spend an afternoon with Resharper, and use its automatic refactoring tools.
Assume the guy picking this up from you is an axe-wielding maniac who knows where you live.
I usually ask myself whether I could maintain and fix the code at 3 AM, sleep-deprived, six months after writing it. It has served me well. Looking at your formula, I'm not sure I can.
Ages ago I worked in the insurance industry. Some of my colleagues were tasked to convert the actuarial formulas into code, first FORTRAN and later C. Mathematical and programming skills varied from colleague to colleague. What I learned from reviewing their code was the following:
document the actual formula in code; without it, years later you'll have trouble remembering the actual formula. External documentation goes missing, becomes dated or simply may not be accessible.
break the formula into discrete components that can be documented, reused and tested (see the sketch after this list).
use constants to document equations; magic numbers have very little context and often require existing knowledge for other developers to understand.
rely on the compiler to optimize code where possible. A good compiler will inline methods, reduce duplication and optimize the code for the particular architecture. In some cases it may duplicate portions of the formula for better performance.
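A minimal sketch of those first three points applied to the formula from the question (the constant names are illustrative guesses, since the coefficients have no stated domain meaning):

// someNumber = .123 + .456 * e^(.789 * (homeIndex + .22)^.012)
private const double BaseOffset = 0.123;
private const double Scale = 0.456;
private const double RateCoefficient = 0.789;
private const double IndexShift = 0.22;
private const double IndexExponent = 0.012;

private static double GrowthExponent(double homeIndex) =>
    RateCoefficient * Math.Pow(homeIndex + IndexShift, IndexExponent);

public static double SomeNumber(double homeIndex) =>
    BaseOffset + Scale * Math.Pow(Math.E, GrowthExponent(homeIndex));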
That said, there are times where hard coding just simplifies things, especially if those values are well understood within a particular context. For example, dividing (or multiplying) something by 100 or 1000 because you're converting a value to dollars. Another one is to multiply something by 3600 when you'd like to convert hours to seconds. Their meaning is often implied from the greater context. The following doesn't say much about the magic number 100:
public static double a(double b, double c)
{
return (b - c) * 100;
}
but the following may give you a better hint:
public static double calculateAmountInCents(double amountDue, double amountPaid)
{
return (amountDue - amountPaid) * 100;
}
As the above comment states, this is far from complex.
You can however store the magic numbers in constants/app.config values, so as to make it easier for the next developer to maintain your code.
When storing such constants, make sure to explain to the next developer (read yourself in 1 month) what your thoughts were, and what they need to keep in mind.
Also explain what the actual calculation is for and what it is doing.
Do not leave it in-line like this.
Constant so you can reuse, easily find, easily change and provides for better maintaining when someone comes looking at your code for the first time.
You can do a config if it can/should be customized. What is the impact of a customer altering the value(s)? Sometimes it is best to not give them that option. They could change it on their own then blame you when things don't work. Then again, maybe they have it in flux more often than your release schedules.
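A minimal sketch of the constants/app.config idea (the "TaxRate" key and its value are made up; this assumes a reference to System.Configuration):

// app.config:
// <appSettings>
//   <add key="TaxRate" value="0.21" />
// </appSettings>

using System.Configuration;
using System.Globalization;

double taxRate = double.Parse(
    ConfigurationManager.AppSettings["TaxRate"],
    CultureInfo.InvariantCulture);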
It's worth noting that the C# compiler (or is it the CLR?) will automatically inline one-line methods, so if you can extract certain formulas into one-liners you can just extract them as methods without any performance loss.
EDIT:
Constants and such more or less depend on the team and the quantity of use. Obviously, if you're using the same hard-coded number more than once, make it a constant. However, if you're writing a formula that it's likely only you will ever edit (small team), then hard-coding the values is fine. It all depends on your team's views on documentation and maintenance.
If the calculation in your line explains something for the next developer then you can leave it, otherwise it's better to have a calculated constant value in your code or configuration files.
I found one line in production code which was like:
int interval = 1 * 60 * 60 * 1000;
Even without any comment, it wasn't hard to tell that the original developer meant 1 hour in milliseconds, which would not have been obvious from a literal value of 3600000.
IMO, maybe leaving the calculation written out is better for scenarios like that.
Names can be added for documentation purposes. The amount of documentation needed depends largely on the purpose.
Consider following code:
float e = m * 8.98755179e16f;
And contrast it with the following one:
const float c = 299792458;
float e = m * c * c;
Even though the variable names are not very 'descriptive' in the latter, you'll have a much better idea of what the code is doing than in the first one - arguably there is no need to rename c to speedOfLight, m to mass and e to energy, as the names are self-explanatory in their domains.
const float speedOfLight = 299792458;
float energy = mass * speedOfLight * speedOfLight;
I would argue that the second snippet is the clearest one - especially if the programmer can expect to find STR (special relativity) in the code (an LHC simulator or something similar). To sum up - you need to find an optimal point. The more verbose the code, the more context you provide - which might both help to understand the meaning (what is e and c vs. we do something with mass and the speed of light) and obscure the big picture (we square c and multiply by m vs. needing to scan the whole line to get the equation).
Most constants have some deeper meaning and/or established notation, so I would consider at least naming them by convention (c for the speed of light, R for the gas constant, sPerH for seconds per hour). If the notation is not clear, longer names should be used (sPerH in a class named Date or Time is probably fine, while it is not in Paginator). The really obvious constants can be hardcoded (say, division by 2 when calculating the new array length in merge sort).

Dividing integer types - Are results predictable?

I have a 64-bit long that I want to round down to the nearest 10,000, so I am doing a simple:
long myLong = 123456789;
long rounded = (myLong / 10000) * 10000; //rounded = 123450000
This appears to do what I expect, but as I'm not 100% on the internals of how integer types get divided, I am just slightly concerned that there may be situations where this doesn't work as expected.
Will this still work at very large numbers / edge cases?
Yes, it will work, so long as no result, intermediate or otherwise, exceeds long.MaxValue.
To be explicit about your constants you could use the L specifier at the end, e.g. 123456789L.
For straightforward calculations like this, can I suggest Pex from Microsoft ( http://research.microsoft.com/en-us/projects/pex/ ), which looks for edge cases and tests them. This is a clean-cut example, but if you were building up lots of logic based on things you are unsure of, it's a great tool.
Yes, it will work. The semantics of integer division guarantee what you expect.
However it may be good to write some tests for your specific use case, including edge cases. This will reassure you.
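A minimal sketch of such edge-case checks (RoundDown is an illustrative helper, written as a top-level C# program):

using System.Diagnostics;

static long RoundDown(long value) => (value / 10000L) * 10000L;

Debug.Assert(RoundDown(123456789L) == 123450000L);
Debug.Assert(RoundDown(9999L) == 0L);                                        // values below 10,000 round to 0
Debug.Assert(RoundDown(long.MaxValue) == (long.MaxValue / 10000L) * 10000L); // no overflow
Debug.Assert(RoundDown(-123456789L) == -123450000L);                         // integer division truncates toward zero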

How well do Script# numbers map to Javascript?

I've been playing with Script#, and I was wondering how the C# numbers were converted to Javascript. I wrote this little bit of code
int a = 3 / 2;
and looked at the relevant bit of compiled Javascript:
var $0=3/2;
In C#, the result of 3 / 2 assigned to an int is 1, but in Javascript, which has only one number type, the result is 1.5.
Because of this disparity between the C# and Javascript behaviour, and since the compiled code doesn't seem to compensate for it, should I assume that my numeric calculations written in C# might behave incorrectly when compiled to Javascript?
Should I assume that my numeric calculations written in C# might behave incorrectly when compiled to Javascript?
Yes.
Like you said, "the compiled code doesn't seem to compensate for it" - though for the case you mention where a was declared as an int it would be easy enough to compensate by using var $0 = Math.floor(3/2);. But if you don't control how the "compiler" works you're in a pickle. (You could correct the JavaScript manually, but you'd have to do that every time you regenerated it. Yuck.)
Note also that you are likely to have problems with decimal numbers too due to the way JavaScript represents decimal places. Most people are surprised the first time they find out that JavaScript will tell you that 0.4 * 3 works out to be 1.2000000000000002. For more details see one of the many other questions on this issue, e.g., How to deal with floating point number precision in JavaScript?. (Actually I think C# handles decimals the same way, so maybe this issue won't be such a surprise. Still, it can be a trap for new players...)
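For contrast, the C# semantics that the emitted "3/2" does not reproduce (a small illustrative snippet):

int i = 3 / 2;       // integer division in C#: 1
double d = 3.0 / 2;  // floating-point division: 1.5, which is what JavaScript computes for 3/2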

.NET - Why is there no fixed point numeric data type in C#?

It seems like there would be a ton of uses for a fixed point data type. Why is there not one in .NET?
Note: I understand we can create our own classes/structs to suit our fixed point purposes and needs. That's not my question. I want to know WHY MS decided not to include a fixed point numeric data type.
You're looking for the little-known System.Data.SqlTypes.SqlDecimal class.
Decimal (base-10 floating point) was deemed to be good enough.
One problem probably has to do with the question: where do you fix the point? A type in .NET cannot be parametrized by other arguments than types, so FixedNum<18,6> is simply not possible. And you do not want to create FixedNum1x0, FixedNum1x1, FixedNum2x0, FixedNum2x1, FixedNum2x2, etc.
You need to be able to parametrize your fixed point type, not just values, because that would lead to nigh impossible to track mistakes:
FixedNum f() { return new FixedNum(1, decimals: 2); }
FixedNum x = new FixedNum(1, decimals: 0);
...
x = f(); // precision of x increased.
So you'd need to check and constrain your fixed point values every time you get them from something that's not a local variable. As you do with decimal when you want a fixed scale or precision.
In other words, given the limitations of the .NET type system, decimal is already a built-in implementation of the FixedNum type sketched above.
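A minimal sketch of "constraining" a decimal to a fixed scale, as the last paragraph describes (Fix2 is an illustrative helper for a scale of 2):

static decimal Fix2(decimal value) =>
    decimal.Round(value, 2, MidpointRounding.AwayFromZero);

decimal price = Fix2(19.999m);    // 20.00
decimal total = Fix2(price * 3m); // re-apply the constraint after operations that can change the scale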

Using structs in C# for simple domain values

I am writing a financial application where the concept of 'Price' is used a lot. It's currently represented by the C# decimal type. I would like to make it more explicit and be able to change it to maybe double in the future, so I was thinking of creating a 'Price' struct that would basically act exactly the same as the decimal type (maybe add a bit of validation like must be greater than 0).
What do you think are the pros and cons of doing this?
Please don't use double for money. You'll have to remember to round it for display everywhere you use it, and you have potential accuracy issues if you divide or multiply by large numbers. Decimal will give overflow errors; double will just lose accuracy. I'm not sure about you, but with money, I'd prefer an error and an aborted operation to silently proceeding with a loss of accuracy.
If anything, based on projects I've been on, you may want to consider using a struct that has a decimal and some indication of what currency it is.
Structs should be used for small types that will (in my opinion) be immutable, i.e., value types. I am not sure what you mean by "used a lot", but if these structs will be passed around a lot in performance critical operations you will have to take into account the price of copying them versus the price of heap allocation. I doubt you will need to take that into account, but it is something to think about. I rarely find the need to use structs in my daily activities.
Also, as Jonathan points out, using the double type for money is a bad idea. The decimal type is much better suited to financial calculations.
Yet another aside; you will probably hear a lot of responses which talk about stack v heap allocation, so this article may interest you:
http://blogs.msdn.com/ericlippert/archive/2009/04/27/the-stack-is-an-implementation-detail.aspx
There shouldn't be a reason to change the data type for a quantity like this; however, you may decide to add other information such as the currency or the number of decimal places to keep track of in calculations, so using a struct at this point will save you a LOT of time down the road.
Structs may not be so accessible from .NET languages other than C#. Rounding errors could be a problem too. Why not just create a Money class and store the value as a decimal along with the currency used?
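A minimal sketch of the "decimal plus currency" idea from these answers (the names and the non-negative check are illustrative; whether it should be a struct or a class is exactly the trade-off discussed above):

using System;

public readonly struct Money
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        if (amount < 0)
            throw new ArgumentOutOfRangeException(nameof(amount), "Price must not be negative.");
        Amount = amount;
        Currency = currency ?? throw new ArgumentNullException(nameof(currency));
    }

    public static Money operator +(Money left, Money right)
    {
        if (left.Currency != right.Currency)
            throw new InvalidOperationException("Cannot add amounts in different currencies.");
        return new Money(left.Amount + right.Amount, left.Currency);
    }

    public override string ToString() => $"{Amount:0.00} {Currency}";
}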
