Strange arithmetic error with vectors - c#

I am using DirectX to draw on screen.
I want to move the image about whilst keeping its dimensions, so I am applying some arithmetic to the vertices:
float boxPosFactorX = (869-3)+(100/100 * (1063 - 869));
float boxPosFactorY = (940-3)+(100/100 * (1038 - 940));
vertexes[0].Position = new Vector4((50 * boxScale) + boxPosFactorX, (50 * boxScale) + boxPosFactorY, 0, 1.0f);
// other vertices follow the same structure, just with different constants (e.g. 50 above is the constant value for this vertex)
Now here is the really weird part: the code above works as expected, but as soon as I change the ratio "100/100" to "99/100" or anything less, it behaves as if the code were:
float boxPosFactorX = (869 - 3);
float boxPosFactorY = (940 - 3);

99/100 is 0 (and 101/100 is exactly 1). That is integer arithmetic. If you want floating point arithmetic, use 99F/100.
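For example (a sketch based on the constants in the question; the variable names are mine, and the only real change is the F suffix forcing floating-point division):
// Integer division: 99 / 100 == 0, so the whole second term disappears.
float factorInt = (869 - 3) + (99 / 100 * (1063 - 869));    // 866
// Floating-point division: 99F / 100 == 0.99f, so the term survives.
float factorFloat = (869 - 3) + (99F / 100 * (1063 - 869)); // 866 + 192.06 = 1058.06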

Related

Problems with buoyancy and multiple Gerstner waves. Waves created using Shadergraph, and equations recreated in code to try and simulate flotation

Quick summation:
I am attempting to create an ocean comprised of planes that can be easily loaded and unloaded based on distance. On this ocean I want a boat to sail with a player onboard in the first person, where I want them to experience the buoyancy of their boat relative to the surrounding waves.
I am new to shadergraph and have been following several tutorials to try and create the desired effect.
These tutorials include
Catlikecoding's Wave shader
https://catlikecoding.com/unity/tutorials/flow/waves/
Zicore's Gerstner wave
https://www.youtube.com/watch?v=Awd1hRpLSoI&ab_channel=Zicore
Tom Weiland's dynamic water physics
https://www.youtube.com/watch?v=eL_zHQEju8s&ab_channel=TomWeiland
These resources have gotten me a good chunk of the way there, but I've run into some issues regarding the boat physics specifically.
I understand the math behind simulating Gerstner waves, and have tried to set up a WaveManager that calculates the y-value of a "floater" at position (x,z).
Floater.cs
public Rigidbody rigidBody;
public float depthBeforeSubmerged = 1f;
public float displacementAmount = 3f;
public int floaterCount = 1;
public float waterDrag = 0.99f;
public float waterAngularDrag = 0.5f;

private void FixedUpdate()
{
    // Each floater carries its share of gravity.
    rigidBody.AddForceAtPosition(Physics.gravity / floaterCount, transform.position, ForceMode.Acceleration);

    float waveHeight = WaveManager.instance.GetWaveHeight(transform.position.x, transform.position.z);
    if (transform.position.y < waveHeight)
    {
        // Buoyancy grows with how far the floater sits below the wave surface.
        float displacementMultiplier = Mathf.Clamp01((waveHeight - transform.position.y) / depthBeforeSubmerged) * displacementAmount;
        rigidBody.AddForceAtPosition(new Vector3(0f, Mathf.Abs(Physics.gravity.y) * displacementMultiplier, 0f), transform.position, ForceMode.Acceleration);

        // Simple water drag on linear and angular velocity while submerged.
        rigidBody.AddForce(displacementMultiplier * -rigidBody.velocity * waterDrag * Time.fixedDeltaTime, ForceMode.VelocityChange);
        rigidBody.AddTorque(displacementMultiplier * -rigidBody.angularVelocity * waterAngularDrag * Time.fixedDeltaTime, ForceMode.VelocityChange);
    }
}
This is pretty much lifted directly from Tom Weiland's video. Basically, when my floater dips below the calculated wave, it applies force to make it travel upwards. Following his instructions carefully yielded decent results, but the problem arose when I started using Shadergraph to create my ocean.
The main issue is I wanted the waves to be tileable across multiple planes, so I used the object position and transformed it to world position to do calculations, and then added it back to the object position before manipulating the vertices of the ocean plane.
I've tried to show it below here:
This makes the ocean plane tileable and looks great, but also enlarges it in the scene quite a bit. I've put a regular plane on top to show the difference. Both are 1x1 units in the inspector.
So this is the first problem. The calculations I do in my WaveManager aren't lining up properly with the actual visual representation of the waves.
The second problem is that I can't seem to make the calculations done in WaveManager give me the correct y-coordinates.
In the shader, the waves are animated using the Time-component.
I've found the documentation on Shadergraph nodes to be a bit sparse, and since I'm self-taught I have a hard time wrapping my head around some of these concepts.
I've had a hard time working out how to calculate the change in y-coordinates over time in the WaveManager script. The different solutions I've tried have just made the y-coordinate slowly drift further into the negative range. I just have no idea how to make my calculations match up with the ones done on the GPU.
It's not important that it be super accurate, just good enough to sell the effect with small waves.
The WaveManager code, finally.
private void Start()
{
    waveLengthA = waves.GetFloat("_WaveLengthA");
    waveLengthB = waves.GetFloat("_WaveLengthB");
    waveLengthC = waves.GetFloat("_WaveLengthC");
    waveLengthD = waves.GetFloat("_WaveLengthD");
    steepnessA = waves.GetFloat("_SteepnessA");
    steepnessB = waves.GetFloat("_SteepnessB");
    steepnessC = waves.GetFloat("_SteepnessC");
    steepnessD = waves.GetFloat("_SteepnessD");
    directionA = waves.GetVector("_DirectionA");
    directionB = waves.GetVector("_DirectionB");
    directionC = waves.GetVector("_DirectionC");
    directionD = waves.GetVector("_DirectionD");
    kA = (2 * Mathf.PI) / waveLengthA;
    kB = (2 * Mathf.PI) / waveLengthB;
    kC = (2 * Mathf.PI) / waveLengthC;
    kD = (2 * Mathf.PI) / waveLengthD;
    cA = Mathf.Sqrt(Mathf.Abs(Physics.gravity.y) / kA);
    cB = Mathf.Sqrt(Mathf.Abs(Physics.gravity.y) / kB);
    cC = Mathf.Sqrt(Mathf.Abs(Physics.gravity.y) / kC);
    cD = Mathf.Sqrt(Mathf.Abs(Physics.gravity.y) / kD);
}
private void Update()
{
    offset += Time.deltaTime;
}

public float GetWaveHeight(float x, float z)
{
    fA = kA * (directionA.x * x + directionA.y * z - cA * offset);
    fB = kB * (directionB.x * x + directionB.y * z - cB * offset);
    fC = kC * (directionC.x * x + directionC.y * z - cC * offset);
    fD = kD * (directionD.x * x + directionD.y * z - cD * offset);
    position += new Vector3(x + directionA.x * steepnessA / kA * Mathf.Cos(fA), steepnessA / kA * Mathf.Sin(fA), z + directionA.y * steepnessA / kA * Mathf.Cos(fA));
    position += new Vector3(x + directionB.x * steepnessB / kB * Mathf.Cos(fB), steepnessB / kB * Mathf.Sin(fB), z + directionB.y * steepnessB / kB * Mathf.Cos(fB));
    position += new Vector3(x + directionC.x * steepnessC / kC * Mathf.Cos(fC), steepnessC / kC * Mathf.Sin(fC), z + directionC.y * steepnessC / kC * Mathf.Cos(fC));
    position += new Vector3(x + directionD.x * steepnessD / kD * Mathf.Cos(fD), steepnessC / kD * Mathf.Sin(fD), z + directionD.y * steepnessD / kD * Mathf.Cos(fD));
    return position.y;
}
The code above is quite ugly with a lot of repetition, but my plan is to make a constructor at some point to make it easier to read.
I grab all the values used in my shader, to make sure they match even if I change the look of the waves. Then I do the calculations from Catlikecoding and plug in the x- and z-coordinates of my floating object.
As far as I can understand, it should work if I just combine all the calculated vectors, but obviously I'm missing something.
From what I've seen others do, they often opt to create custom planes with more vertices that can cover their entire game world and avoid the problem, but I'm making a larger world and am worried about performance (though I don't know if I even should be). I really like the fact that my ocean planes are tileable.
Does anyone here know of any solutions, or can anyone help me solve the issue of world space vs. object space, or how to accurately recreate the time progression as seen in the shader?
Any help would be much appreciated.
So, for anyone struggling with this, I found the answer.
When combining multiple waves together, the manipulated plane grows in size for every wave added.
In my above question, I had somehow messed up the formulas for calculating the waves. I redid them and got the correct result.
Now, the trick is to simply divide the resultant wave by the number of waves that you are combining. This makes sure that the actual size of the plane won't change.
You of course need to do this in your WaveManager code as well. It's important to keep in mind that you only need the y-coordinate, so you only have to calculate that. For each wave, calculate the y-coordinate and then divide the combined height by the number of waves. This will make the flotation code work as it should!
Hope this helps someone out there who struggled like me.
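For anyone who wants a concrete starting point, here is a minimal sketch of a GetWaveHeight along those lines, reusing the fields from the WaveManager above (the division by 4 assumes four combined waves matching the A-D parameters, and the exact formula in your shader may differ):
public float GetWaveHeight(float x, float z)
{
    float fA = kA * (directionA.x * x + directionA.y * z - cA * offset);
    float fB = kB * (directionB.x * x + directionB.y * z - cB * offset);
    float fC = kC * (directionC.x * x + directionC.y * z - cC * offset);
    float fD = kD * (directionD.x * x + directionD.y * z - cD * offset);

    // Only the vertical (y) component of each Gerstner wave is needed for buoyancy.
    float height = steepnessA / kA * Mathf.Sin(fA)
                 + steepnessB / kB * Mathf.Sin(fB)
                 + steepnessC / kC * Mathf.Sin(fC)
                 + steepnessD / kD * Mathf.Sin(fD);

    // Divide by the number of combined waves so the CPU-side height matches the
    // shader, which is normalised the same way.
    return height / 4f;
}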

Combine more than 3 rotations (Quaternions)

I have a 3D point and the x,y,z rotations (qInitial) for that point.
I want to rotate that point more (by some degrees that could be 0 up to 360) around y axis (qYextra). How can I calculate the final Euler rotation (qResult.eulerAngles) that is a combination of these 4 rotations (x-y-z-y)?
I have tried calculating the initial quaternion rotation, and the extra rotation to be applied. And then multiply these two quaternions. However, I get weird results (probably gimbal lock).
Code in C#. Unity.
Quaternion qX = Quaternion.AngleAxis(rotationFromBvh.x, Vector3.right);
Quaternion qY = Quaternion.AngleAxis(rotationFromBvh.y, Vector3.up);
Quaternion qZ = Quaternion.AngleAxis(rotationFromBvh.z, Vector3.forward);
Quaternion qYextra = Quaternion.AngleAxis(angle, Vector3.up);
Quaternion qInitial = qY * qX * qZ; // Yes. This is the correct order.
qY*qX*qZ has exactly the same Euler x,y,z results as
Quaternion.Euler(rotationFromBvh)
Quaternion qResult = qInitial * qYextra;
return qResult.eulerAngles;
I can confirm that the code works fine (no gimbal lock) when the 4th rotation is 0 degrees (qYextra = identity), meaning that qInitial is correct. So the error might be due to the combination of those 2 rotations (qInitial and qYextra), OR due to the conversion from Quaternion to Euler.
EXAMPLE: (qYextra angle is 120 degrees)
RESULTS:
qInitial.eulerAngles gives these results: applying_qInitial_rotation
qResult.eulerAngles gives these results: applying_qResult_rotation
EXPECTED RESULTS:
The expected results should be like qInitial but rotated 120 degrees around y.
Any suggestions? I haven't yet found a solution, and probably I won't.
In your question, you write:
How can I calculate the final Euler rotation that is a combination of these 4 rotations (x-y-z-y)?
However, in your code, you write
Quaternion qInitial = qY * qX * qZ; // Multiply them in the correct order.
I don't know Unity, but I would have expected that you would want the order of the rotations to match x, y, z rather than y, x, z.
You stated that it works when the y-rotation is 0, in which case the place of the y-rotation in the order becomes irrelevant.
Do you get the correct result if you instead write the code below?
Quaternion qInitial = qX * qY * qZ; // Multiply them in the correct order.
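A quick way to sanity-check which order matches your BVH data is to compare what each product does to a test vector (just a sketch for inspection, not part of the original answer):
Quaternion a = qY * qX * qZ; // order used in the question
Quaternion b = qX * qY * qZ; // order suggested here
Debug.Log(a.eulerAngles + " vs " + b.eulerAngles);
Debug.Log(a * Vector3.forward + " vs " + b * Vector3.forward);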

scaling 4:3 points to 16:9 points

I have an array of points which all range from x(0-512) and y(0-384) which means an aspect ratio of 4:3.
If I want to display every point, perfectly, on a 16:9 monitor, what math would be needed to achieve this?
Let's say "ee" is my 4:3 point and "point" is the 16:9 point I need..
I thought, since I'm trying to scale it to a 1920x1080 monitor, which is a 16:9 aspect ratio:
point = new PointF(ee.x * (1920 / 512), ee.y * (1080 / 384));
But this seems to be off by a bit.
Any help? Thanks.
You can't exactly match the aspect ratio other than by multiplying each dimension by an integer. Here the only integer that fits is 2 (because 384 * 3 > 1080)...
so you would have to do:
point = new Point (ee.x * 2, ee.y * 2);
and you could center it with:
point = new Point(ee.x * 2 + ((1920 - 512 * 2) / 2), ee.y * 2 + ((1080 - 384 * 2) / 2));
Hope that helps...
Edit: with floats, you have to take the minimum of the two multipliers:
var multiplier = Math.Min(1920.0 / 512, 1080.0 / 384);
point = new PointF((float)(ee.x * multiplier + (1920 - 512 * multiplier) / 2), (float)(ee.y * multiplier + (1080 - 384 * multiplier) / 2));
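Putting the edit together as a self-contained helper (a sketch; the method name is mine, and the hard-coded 1920x1080 and 512x384 sizes are just the numbers from the question):
using System;
using System.Drawing;

static PointF ScaleToScreen(PointF ee)
{
    // Uniform scale that fits 512x384 inside 1920x1080 without distortion.
    double multiplier = Math.Min(1920.0 / 512, 1080.0 / 384);

    // Offsets centre the scaled 4:3 area inside the 16:9 screen (pillarboxing).
    double offsetX = (1920 - 512 * multiplier) / 2;
    double offsetY = (1080 - 384 * multiplier) / 2;

    return new PointF((float)(ee.X * multiplier + offsetX),
                      (float)(ee.Y * multiplier + offsetY));
}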
Could you elaborate what you mean by "off"?
If you mean that the image looks stretched horizontally, it is because the aspect ratio is larger than before. If you want the image to look like what it was before (but bigger), you'll need to scale one axis by the aspect ratio to fix it.
float aspect_ratio_before = 4.0f / 3.0f;
float aspect_ratio_after = 16.0f / 9.0f;
// This is equal to 4/3.
float aspect_ratio_difference = aspect_ratio_after / aspect_ratio_before;
// I squish the X axis here.
point.x /= aspect_ratio_difference;
// Alternatively, you can also multiply point.y by the
// aspect_ratio_difference, though that will stretch the Y axis instead.
// Use only one!
// point.y *= aspect_ratio_difference;
Disclaimer: I have not worked with .NET before, so I don't know all the details of its rendering engine. This is based on my experience working with OpenGL and scaling.

How to "round" a 2D Vector to nearest 15 degrees

I'm working on a simple game and I'm trying to simplify part of the 2D collision reaction in the game. When certain objects hit walls, I'm calculating a collision normal (collisionPoint - objectCenter) and reflecting based on that normal. I'm interested in rounding that normal vector to its nearest 15° but I'm not sure of a good way to go about that.
My current thought is doing something like this
float angle = (float)Math.Atan2(normal.Y, normal.X) * Rad2Deg;
float newAngle = ((int)(angle + 7.5f) / 15) * 15.0f * Deg2Rad;
Vector2 newNormal = new Vector2((float)Math.Cos(newAngle), (float)Math.Sin(newAngle));
Is this a reasonable way to do it? Is there a better way?
Try this:
float roundAngle = 15 * Deg2Rad;
float angle = (float)Math.Atan2(normal.Y, normal.X);
Vector2 newNormal;
if (angle % roundAngle != 0)
{
    float newAngle = (float)Math.Round(angle / roundAngle) * roundAngle;
    newNormal = new Vector2((float)Math.Cos(newAngle), (float)Math.Sin(newAngle));
}
else
{
    newNormal = Vector2.Normalize(normal);
}
You don't need to add 7.5, take this example:
// 4 degrees should round to 0
(4 + 7.5) / 15 == 11.5 / 15 == 0.77
// When this gets rounded up to 1 and multiplied by 15 again, it becomes 15 degrees.
// Don't add 7.5, and you get this:
4 / 15 == 0.27
// When rounded, it becomes 0 and, as such the correct answer
// Now how about a negative number; -12
-12 / 15 == -0.8
// Again, when rounded we get the correct number
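As a concrete usage sketch of the code above (the 40 degree test angle is only for illustration):
Vector2 normal = new Vector2((float)Math.Cos(40 * Deg2Rad), (float)Math.Sin(40 * Deg2Rad));
// angle / roundAngle == 40 / 15 == 2.67, which Math.Round takes to 3,
// so newAngle is 45 degrees and newNormal points 45 degrees up from the x axis.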
Actually, this is more correct if you want the nearest 15 degree angle.
Do this:
newangle% = INT((angle% + 7.5) / 15) * 15
INT ALWAYS rounds DOWN by default, so this should properly give you the nearest angle whether it is positive or negative. Have fun!
And add the degree-to-rad or rad-to-degree conversion if needed INSIDE the parens (right next to angle%): if the angle is not given in degrees, then use some sort of rad2deg multiplier in there.
This is more like how you would do it in BASIC; with some modification it will work in C code or such. Good luck!

What's wrong with this XNA RotateVector2 function?

I know this is probably a very simple question, but I can't seem to figure it out. First of all, I want to specify that I did look over Google and SO for half an hour or so without finding the answer to my question (yes, I am serious).
Basically, I want to rotate a Vector2 around a point (which, in my case, is always the (0, 0) vector). So, I tried to make a function to do it, with the parameters being the point to rotate and the angle (in degrees) to rotate by.
Here's a quick drawing showing what I'm trying to achieve:
I want to take V1 (red vector) and rotate it by an angle A (blue) to obtain a new vector (V2, green). In this example I used one of the simplest cases: V1 on the axis, and a 90 degree angle, but I want the function to handle more "complicated" cases too.
So here's my function:
public static Vector2 RotateVector2(Vector2 point, float degrees)
{
    return Vector2.Transform(point,
        Matrix.CreateRotationZ(MathHelper.ToRadians(degrees)));
}
So, what am I doing wrong? When I run the code and call this function with the (0, -1) vector and a 90 degrees angle, I get the vector (1, 4.371139E-08) ...
Also, what if I want to accept a point to rotate around as a parameter too? So that the rotation doesn't always happen around (0, 0)...
Chris Schmich's answer regarding floating point precision and using radians is correct. I suggest an alternate implementation for RotateVector2 and answer the second part of your question.
Building a 4x4 rotation matrix to rotate a vector will cause a lot of unnecessary operations. The matrix transform is actually doing the following but using a lot of redundant math:
public static Vector2 RotateVector2(Vector2 point, float radians)
{
    float cosRadians = (float)Math.Cos(radians);
    float sinRadians = (float)Math.Sin(radians);

    return new Vector2(
        point.X * cosRadians - point.Y * sinRadians,
        point.X * sinRadians + point.Y * cosRadians);
}
If you want to rotate around an arbitrary point, you first need to translate your space so that the point to be rotated around is the origin, do the rotation and then reverse the translation.
public static Vector2 RotateVector2(Vector2 point, float radians, Vector2 pivot)
{
    float cosRadians = (float)Math.Cos(radians);
    float sinRadians = (float)Math.Sin(radians);

    // Translate so the pivot sits at the origin, rotate, then translate back.
    Vector2 translatedPoint = new Vector2();
    translatedPoint.X = point.X - pivot.X;
    translatedPoint.Y = point.Y - pivot.Y;

    Vector2 rotatedPoint = new Vector2();
    rotatedPoint.X = translatedPoint.X * cosRadians - translatedPoint.Y * sinRadians + pivot.X;
    rotatedPoint.Y = translatedPoint.X * sinRadians + translatedPoint.Y * cosRadians + pivot.Y;
    return rotatedPoint;
}
Note that the vector arithmetic has been inlined for maximum speed.
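As a quick usage check (the values are only for illustration), rotating (0, -1) by 90 degrees about the origin with the radian-based version above gives roughly (1, 0), matching the result reported in the question:
Vector2 v = RotateVector2(new Vector2(0f, -1f), MathHelper.ToRadians(90f));
// v is approximately (1, 0); the tiny leftover Y (about 4.4E-08) is floating point noise.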
So, what am I doing wrong? When I run the code and call this function with the (0, -1) vector and a 90 degrees angle, I get the vector (1, 4.371139E-08) ...
Your code is correct; this is just a floating point representation issue. 4.371139E-08 is essentially zero (it's 0.00000004371139), but the transformation did not produce a value that was exactly zero. This is a common problem with floating point that you should be aware of. This SO answer has some additional good points about floating point.
Also, if possible, you should stick with radians instead of using degrees. This is likely introducing more error into your computations.
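If that near-zero noise matters in your own checks, one common pattern (not from the original answer, just a general sketch) is to compare against a small tolerance instead of exact zero:
const float Epsilon = 1e-6f;
bool effectivelyZero = Math.Abs(result.Y) < Epsilon; // result being the vector returned by RotateVector2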
