I am currently debugging a piece of software that I translated from C++ to C#.
Everything goes fine until one value in C# differs from the C++ one.
The computation is:
v2z = vBlock.z * cos(phi) + v1x * sin(phi);
where
phi = 0.800, vBlock.z = -3.6127196945104552 and v1x = 4.5158996181380688
The result in C++ is
4.4408920985006262e-16
In C# the result is
0
That is the problem, because a condition later in my code requires v2z > 0.
After reading topics on the subject, I found this one interesting: Formatting doubles for output in C#.
I thought I could use DoubleConverter.ToExactString() on my C# values, but that didn't work.
Do you have any ideas how to get exactly the same values in C# as in C++?
Related
In C++, one can use std::cout << std::hexfloat << someFloatValue to print floating point values in base 16 instead of the usual base 10. For example, 0.09375 (i.e. 3/32) would be printed as "0x1.8p-4".
It should not print "0x3DC00000"! That is simply type-punning the float to an int and printing that in hexadecimal, which is not what I'm looking for.
I am looking specifically for the C++ "hexfloat" format.
I cannot find anything similar in .NET. I need to print some floating point values from a VB.NET program (but that's not required; C# works too) that are later read by a C++ program. Ideally, I would also like to be able to parse those back into my VB.NET program.
How do I correctly print hexfloats from .NET? I would rather avoid rolling my own algorithm if there is already something in the framework. If not, I would like to avoid platform-dependent code, e.g. plonking the float into a union ([StructLayout(LayoutKind.Explicit)] struct) and inspecting the underlying bits through that.
This may be obvious, but I have recently inherited some legacy code, and scattered around it are array indexes like this:
someArray(&H7D0)
I get that "&H7D0" is the index, but how do I go about changing it to a plain number as I convert the code to C#?
The code is a mess and it's not obvious what it might be.
This is a hexadecimal number. The C# equivalent is someArray[0x7D0] (note that C# indexes with square brackets).
Both are equivalent to the decimal number 2000, so you could also write the index as the plain literal 2000, which reads the same in both languages.
I need to translate this C# code from the NReplayGain library here https://github.com/karamanolev/NReplayGain into working VB.NET code.
TrackGain trackGain = new TrackGain(44100, 16);
foreach (sampleSet in track) {
trackGain.AnalyzeSamples(leftSamples, rightSamples)
}
double gain = trackGain.GetGain();
double peak = trackGain.GetPeak();
I've translated this:
Dim trackGain As New TrackGain(samplerate, samplesize)
Dim gain As Double = trackGain.GetGain()
Dim peak As Double = trackGain.GetPeak()
Use an online converter. C# to VB converters:
dotnet Spider
SharpDevelop
Telerik
developerFusion
Your C# code shown above has errors; it is probably pseudocode. I have not found any declaration of a sample set at the GitHub address you mentioned.
A semicolon is missing inside the loop, and the loop variable sampleSet is not declared. Where do leftSamples and rightSamples come from? The loop variable is not used inside the loop; probably the left and right samples are part of the sampleSet. With those corrections, I can convert the code by using one of these online converters.
C#:
TrackGain trackGain = new TrackGain(44100, 16);
foreach (SampleSet sampleSet in track) {
trackGain.AnalyzeSamples(sampleSet.leftSamples, sampleSet.rightSamples);
}
double gain = trackGain.GetGain();
double peak = trackGain.GetPeak();
VB:
Dim trackGain As New TrackGain(44100, 16)
For Each sampleSet As SampleSet In track
trackGain.AnalyzeSamples(sampleSet.leftSamples, sampleSet.rightSamples)
Next
Dim gain As Double = trackGain.GetGain()
Dim peak As Double = trackGain.GetPeak()
After all, the two versions don't look that different!
It is fairly simple to reference assemblies written in different languages.
I frequently reference C# code from F# and have referenced VB.NET code from C#.
Just be sure to compile both projects to target the same framework version (say, .NET 4.5 or Mono 2.10) and CPU architecture.
If you need the code to reside in the same assembly, I would suggest you study the C# syntax and convert it manually.
Edit: After browsing the repository, I only see a handful of classes.
Besides, learning new languages is a great way to improve your ability both to write and to read code in the languages you are already comfortable with.
A good online solution for translating C# to VB.NET and vice versa, or to another language such as JavaScript, is CodeTranslator by CarlosAg. So far, I haven't had any problems with this translator.
I have some code that is behaving strangely: it seems to round the result of adding two double values, and this is causing issues in my code.
Unfortunately I cannot pin the issue down, because my unit testing environment works fine (no rounding) while my application does not (it rounds).
In my test environment:
a = -3.7468700408935547
b = 525218.0
c = b + a
c = 525214.25312995911
Inside my application:
a = -3.7468700408935547
b = 525218.0
c = b + a
c = 525214.25
What could be causing this? Project configuration? (I'm using Visual Studio, by the way.)
Edit (from comments)
I'm stepping through the same code using the Visual Studio debugger, so it's the exact same piece of code.
I have more code but I narrowed the problem down to that particular sum.
The binary representations of each value are:
test environment:
System.BitConverter.ToString(System.BitConverter.GetBytes(a)) "00-00-00-00-97-F9-0D-C0" string
System.BitConverter.ToString(System.BitConverter.GetBytes(b)) "00-00-00-00-44-07-20-41" string
System.BitConverter.ToString(System.BitConverter.GetBytes(c)) "00-40-9A-81-3C-07-20-41" string
inside application:
System.BitConverter.ToString(System.BitConverter.GetBytes(a)) "00-00-00-00-97-F9-0D-C0" string
System.BitConverter.ToString(System.BitConverter.GetBytes(b)) "00-00-00-00-44-07-20-41" string
System.BitConverter.ToString(System.BitConverter.GetBytes(c)) "00-00-00-80-3C-07-20-41" string
Edit 2:
As Alexei Levenkov points out, this issue is caused by a library that changes the FPU config.
For anyone who is curious what this meant for me:
I was able to mitigate the issue for my particular piece of code by making some assumptions about my input values and doing some preemptive rounding, which in turn made my calculations consistent.
Your application may be doing something strange with the FPU configuration, i.e. using some library for math that reconfigures the precision...
Direct3D is a possible suspect; see, for example, Pow implementation for double.
Use decimal if you need the kind of precision required in financial calculations.
Edit
See:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Five Tips for Floating Point Programming
I've been playing with Script#, and I was wondering how the C# numbers were converted to JavaScript. I wrote this little bit of code
int a = 3 / 2;
and looked at the relevant bit of the compiled JavaScript:
var $0=3/2;
In C#, the result of 3 / 2 assigned to an int is 1, but in JavaScript, which has only one number type, it is 1.5.
Given this disparity between the C# and JavaScript behaviour, and since the compiled code doesn't seem to compensate for it, should I assume that my numeric calculations written in C# might behave incorrectly when compiled to JavaScript?
Should I assume that my numeric calculations written in C# might behave incorrectly when compiled to Javascript?
Yes.
Like you said, "the compiled code doesn't seem to compensate for it", though for the case you mention, where a was declared as an int, it would be easy enough to compensate by using var $0 = Math.floor(3/2);. But if you don't control how the "compiler" works, you're in a pickle. (You could correct the JavaScript manually, but you'd have to do that every time you regenerated it. Yuck.)
Note also that you are likely to have problems with fractional numbers too, due to the way JavaScript represents them. Most people are surprised the first time they find out that JavaScript will tell you that 0.4 * 3 works out to be 1.2000000000000002. For more details see one of the many other questions on this issue, e.g. How to deal with floating point number precision in JavaScript? (Actually, C# handles doubles the same way, so maybe this issue won't be such a surprise. Still, it can be a trap for new players...)