I created a Unity3d project and I used some spotlights behind objects to get their shadows. I'm trying to get the real size (using my scale) of the shadow once reflected on the floor. Is there a way to do that?
I believe your question mostly belongs on Mathematics Stack Exchange, but here's an approach that I hope will point you in the right direction.
The hypotheses I took here are:
you know your object height when scale = 1
your object isn't too large at its top (or you will have to include half its width in the maths)
your object's pivot is placed at its base (on a human: under its feet)
your object is placed on the floor (and therefore not in the air; otherwise it's a bit more complicated to calculate, but the idea remains the same)
Here's a quick schema of the situation:
Now you can calculate your shadow's size using something like this:
// Top of the object; the pivot is at the base, so add the scaled height straight up
Vector3 topPoint = YOUR_OBJECT.transform.position + Vector3.up * (YOUR_OBJECT.transform.lossyScale.y * YOUR_OBJECT_HEIGHT);
// Light position projected onto the horizontal plane through topPoint
Vector3 lightFlatPoint = new Vector3(YOUR_LIGHT.transform.position.x, topPoint.y, YOUR_LIGHT.transform.position.z);
float lightDeltaY = YOUR_LIGHT.transform.position.y - topPoint.y;
float lightFlatToTopPointDistance = Vector3.Distance(lightFlatPoint, topPoint);
// Similar triangles: shadow length / object height == horizontal distance / vertical distance
float shadowSize = ((YOUR_OBJECT.transform.lossyScale.y * YOUR_OBJECT_HEIGHT) / lightDeltaY) * lightFlatToTopPointDistance;
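The calculation above is just similar triangles: the shadow length relates to the object height the same way the light's horizontal offset relates to its vertical offset above the top point. Here is a quick Python sketch of the same geometry; the object and light values are made up purely for illustration:

```python
import math

def shadow_size(obj_pos, obj_height, light_pos):
    # Top of the object (pivot at the base, so add the height straight up).
    top = (obj_pos[0], obj_pos[1] + obj_height, obj_pos[2])
    # Project the light onto the horizontal plane through the top point.
    light_flat = (light_pos[0], top[1], light_pos[2])
    light_delta_y = light_pos[1] - top[1]
    flat_dist = math.dist(light_flat, top)
    # Similar triangles: shadow / height == horizontal offset / vertical offset.
    return (obj_height / light_delta_y) * flat_dist

# Object 2 units tall at the origin, light 4 units up and 3 units to the side:
# the triangles give a shadow of (2 / 2) * 3 = 3 units.
print(shadow_size((0, 0, 0), 2.0, (3.0, 4.0, 0.0)))
```

Moving the light higher shrinks the shadow, moving it sideways stretches it, exactly as you'd expect from a spotlight.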
Hope this helps,
I have my building system almost finished; the problem showed up when I debugged my Physics.OverlapBox() using a Gizmos.DrawWireCube(), with the code shown below:
void OnDrawGizmos()
{
Gizmos.matrix = Matrix4x4.TRS(transform.position + boundingOffset, transform.rotation, boundingExtents);
Gizmos.DrawWireCube(Vector3.zero, Vector3.one);
}
I saw the problem but I will show it in an image, since I am kind of struggling to explain.
I have thought of a solution, but I could not find the equations or what the method is called. Basically, there is an original vector to which an offset is added (if defined), and the final vector is then affected by rotation. It is explained in more detail in another image.
Maybe it is a trigonometry-based calculation. I hope you can figure out how to do it.
Thanks in advance.
OK, if I understand you correctly this time, the issue is that your offset only works as long as the transform isn't scaled or rotated.
The reason is that
transform.position + boundingOffset
uses plain worldspace coordinates. As you can see the offset from orange to red dot is exactly the same (in world coordinates) in both pictures.
What you want instead is an offset relative to the transform's position, rotation, and scale. You can convert your local boundingOffset into world coordinates relative to the transform by using TransformPoint:
Transforms position from local space to world space.
So instead use
Collider[] colliders = Physics.OverlapBox(transform.TransformPoint(boundingOffset), boundingExtents / 2, blueprint.transform.rotation);
What it does is basically
transform.position
+ transform.right * boundingOffset.x * transform.lossyScale.x
+ transform.up * boundingOffset.y * transform.lossyScale.y
+ transform.forward * boundingOffset.z * transform.lossyScale.z;
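That expansion is easy to sanity-check outside Unity. Below is a small Python sketch of the same composition, standing in for TransformPoint with a plain yaw rotation around the world Y axis (the function name and numbers are illustrative, not Unity API):

```python
import math

def transform_point(position, yaw_deg, scale, local_offset):
    # Basis vectors of a transform rotated yaw_deg around the world Y axis
    # (left-handed, Unity-style: +90 degrees turns +Z into +X).
    a = math.radians(yaw_deg)
    right   = (math.cos(a), 0.0, -math.sin(a))
    up      = (0.0, 1.0, 0.0)
    forward = (math.sin(a), 0.0, math.cos(a))
    # position + right*off.x*scale.x + up*off.y*scale.y + forward*off.z*scale.z
    return tuple(
        position[i]
        + right[i]   * local_offset[0] * scale[0]
        + up[i]      * local_offset[1] * scale[1]
        + forward[i] * local_offset[2] * scale[2]
        for i in range(3)
    )

# A local offset of (0, 0, 1) on a transform yawed 90 degrees ends up
# one unit along world +X instead of world +Z.
print(transform_point((0, 0, 0), 90.0, (1, 1, 1), (0, 0, 1)))
```

A plain `transform.position + boundingOffset` would ignore the rotation and scale entirely, which is exactly the bug in the question.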
Alternatively
A more maintainable approach I often use in such cases is not providing boundingOffset via a hardcoded/field vector, but instead simply using a child GameObject, e.g. called OffsetPivot, and placing it in the scene however you want. Since it is a child of your transform, it will always keep the correct offset when you rotate/scale/move the parent.
Then in the code I would simply do
[SerializeField] private Transform boundingOffsetPivot;
...
Collider[] colliders = Physics.OverlapBox(boundingOffsetPivot.position, boundingExtents / 2, blueprint.transform.rotation);
Visual aid here
Please see the visual aid. In the gif there are two GUIs. The GUI on the left scales properly based on the camera's distance to the object. The GUI on the right is scaling, as can be seen clearly in the Scene view, but the change is barely noticeable in the Game view. The application is for the HoloLens, where the user's head controls the camera within the Unity scene.
The problem lies in setting the object's scale. I've been racking my brain all day trying to replicate the type of scaling shown on the left GUI. Dividing the distance and using that as the scale factor is clearly incorrect: the object is either far too large and grows too fast, or far too small and grows too slowly, and I don't know how to meet somewhere in the middle. I want it to scale equally in both directions at a noticeable, yet not obnoxiously large or imperceptible, rate.
Implementation code thus far:
// GameObject and GUI assigned in Editor
public GameObject sphere, rotationManager;
// For rotationManager scaling
private float cameraToObject;
void Update()
{
/* Scale rotationManager based on Cameras distance to object */
cameraToObject = Vector3.Distance(
sphere.transform.position,
Camera.main.transform.position);
Debug.Log("distance: " + cameraToObject);
// If camera isn't too close to our object
if (cameraToObject > 4f)
{
// adjust scale
rotationManager.transform.localScale = new Vector3(
cameraToObject / 5,
cameraToObject / 5,
rotationManager.transform.localScale.z);
// enable rotationManager
rotationManager.gameObject.SetActive(true);
}
// Camera is too close, turn off GUI
else
rotationManager.gameObject.SetActive(false);
}
I'm not really sure what you are doing wrong, as dividing the distance by a constant should give the required effect. Anyway, try this code:
using UnityEngine;
public class AutoSize : MonoBehaviour
{
public float FixedSize = .005f;
public Camera Camera;
void Update ()
{
// Distance from the camera to this object
var distance = (Camera.transform.position - transform.position).magnitude;
// Scale with distance (and FOV) so the on-screen size stays constant
var size = distance * FixedSize * Camera.fieldOfView;
transform.localScale = Vector3.one * size;
// Turn the object so it always faces the camera
transform.forward = transform.position - Camera.transform.position;
}
}
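The reason this works: with a perspective camera, an object's apparent size is roughly proportional to its world scale divided by its distance, so setting the scale proportional to distance keeps the on-screen size constant. A quick Python check of that relationship (the constants mirror the script above, but the projection model is deliberately simplified):

```python
def screen_size(world_scale, distance):
    # Perspective projection: apparent size falls off linearly with distance.
    return world_scale / distance

FIXED_SIZE = 0.005
FOV = 60.0

# Scale the object exactly as in Update() above, at several distances.
sizes = [screen_size(d * FIXED_SIZE * FOV, d) for d in (2.0, 5.0, 20.0)]
print(sizes)  # the apparent size is identical at every distance
```

The distance term cancels out, which is why naively "dividing the distance" works once the constant is tuned: the constant only controls how large the GUI appears, not whether it stays stable.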
It seems to work fine - preview
Also, it may be more efficient to keep the menu at a constant size and simply position it by converting from world space to screen space.
I'm creating a 3D game with XNA, and I'm stuck when I try to attach correctly a weapon to a bone.
You can see the result below :
The Image
So the weapon follows the right hand; I just need to find a good rotation, but that is not my problem.
My problem is: when I scale down the machete, its Y value decreases... I think XNA positions the mesh based on its center of mass. I wanted to know if it is possible to modify this center of mass, so I can draw my machete from the handle (I don't know if that's how to say it in this context).
I hope you will understand what my problem is :)
I apologize for my bad English, see you :)
When drawing the weapon you should do a few matrix multiplications:
1) place the blade at the origin
2) resize it
3) rotate it around its local axes
4) place the blade at the wanted location.
The order of the multiplications matters! Multiplying in a different order will yield annoying results and displacements of the model.
Matrix transform = fixPosMat * Matrix.CreateScale(1.5f) * rotationMat * PositionMat;
I'm not sure I understand...
I draw my model using this matrix:
boneLocal = Matrix.Invert(skinnedModel.SkeletonBones[index].InverseBindPoseTransform)
* animationController.SkinnedBoneTransforms[index]
* Matrix.CreateScale(scale)
* _rotation
* Matrix.CreateTranslation(_position);
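The order-of-multiplication point above is easy to demonstrate. With XNA-style row-vector transforms, v * S * T scales first and then translates, while v * T * S also scales the translation, which is exactly the kind of displacement that moves a weapon off its bone. A minimal Python sketch of the difference, using plain function composition instead of real matrices:

```python
def scale(s):
    return lambda p: tuple(s * c for c in p)

def translate(t):
    return lambda p: tuple(c + d for c, d in zip(p, t))

def compose(*fns):
    # Apply the transforms left to right, like v * M1 * M2 in XNA.
    def apply(p):
        for f in fns:
            p = f(p)
        return p
    return apply

origin = (1.0, 0.0, 0.0)
scale_then_move = compose(scale(2.0), translate((10.0, 0.0, 0.0)))
move_then_scale = compose(translate((10.0, 0.0, 0.0)), scale(2.0))

print(scale_then_move(origin))  # (12.0, 0.0, 0.0)
print(move_then_scale(origin))  # (22.0, 0.0, 0.0) -- the translation got scaled too
```

This is why the machete's Y position drifts when it is scaled: the scale is being applied after a translation it should have come before.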
I'm currently using what I believe is called value noise, in combination with fractal Brownian motion, to generate terrain in a 3D environment. I generate a 2D height-map from the noise, with values ranging between -1 and +1. I typically just multiply that return value by 10 or so to set the height, and that generates rolling hills.
What I'd like to do is somehow combine calls to the algorithm so that some areas are very hilly, others quite flat, and some nearly mountainous. How do you go about something like that without the edges between areas being extremely obvious (like jutting cliffs segregating them)?
Edit: This needs to be completely procedural in a near-infinite environment.
Based on ananthonline's answer, I'm using a low-frequency call to my noise generator as the mask. With two biome types, I take the 'impact' of the biome, subtract the absolute value of the mask value minus the biome's 'location' on the mask, and divide the whole thing by the 'impact' value again:
(Impact - abs(Mask - Location)) / Impact
That gives me a value that when positive, I can multiply towards the return value of a specific noise call with specific frequencies, amplitudes, etc (such as rolling hills or mountains or ocean).
The primary issue is that if my mask returned values from 0 to 1, then in a two-biome scenario I'd need to set one biome's location to .25 and the other's to .75, each with an impact of .5, for them to blend together properly - at least if I wanted an even distribution of each biome type and a seamless blend. I'd struggle quite a bit if I wanted, say, mountains to show up twice as often as rolling hills.
I do so hope that makes sense. The math works out fantastically, but I certainly don't think I'm explaining it well with my limited mathematics background. Maybe some code will help (if my cruddy uncommented code means something to someone); note that GetNoise2d returns values between 0 and 1:
float GetHeight(float X, float Z)
{
float fRollingHills_Location = .25f, fRollingHills_Impact = .5f, fRollingHills = 0;
float fMountains_Location = .75f, fMountains_Impact = .5f, fMountains = 0;
float fMask = GetNoise2d(0, X, Z, 2, .01f, .5f, false, false);
float fRollingHills_Strength = (fRollingHills_Impact - Math.Abs(fMask - fRollingHills_Location)) / fRollingHills_Impact;
float fMountains_Strength = (fMountains_Impact - Math.Abs(fMask - fMountains_Location)) / fMountains_Impact;
if (fRollingHills_Strength > 0)
fRollingHills = fRollingHills_Strength * (GetNoise2d(0, X, Z, 2, .05f, .5f, false, false) * 10f + 25f);
if (fMountains_Strength > 0)
fMountains = fMountains_Strength * (GetNoise2d(0, X, Z, 2, .1f, .5f, false, false) * 25f + 10f);
return fRollingHills + fMountains;
}
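To sanity-check the weighting: with the two locations at .25 and .75 and an impact of .5, the two strengths always sum to 1 anywhere between the locations, which is why the blend is seamless. A small Python sketch of the same formula, with constants copied from the C# above:

```python
def strength(mask, location, impact):
    # (Impact - abs(Mask - Location)) / Impact, clamped at zero
    # (the C# version clamps implicitly with its "> 0" checks).
    return max(0.0, (impact - abs(mask - location)) / impact)

HILLS_LOC, MOUNTAINS_LOC, IMPACT = 0.25, 0.75, 0.5

for mask in (0.25, 0.5, 0.75):
    hills = strength(mask, HILLS_LOC, IMPACT)
    mountains = strength(mask, MOUNTAINS_LOC, IMPACT)
    print(mask, hills, mountains, hills + mountains)
# Between the two locations the weights always sum to 1,
# so the blended height never jumps at a biome boundary.
```

The constraint the question complains about is visible here: the weights only partition cleanly because location spacing and impact were chosen to match, which is what makes uneven biome frequencies awkward.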
If a problem with the above code needs to be specified, let it be that doing this with, say, 10 different biomes would require some extreme thought about the exact 'location' and 'impact' values to ensure the blending is flawless. I'd rather code it in such a way that this is already taken care of.
How about generating low-frequency noise (white-black) into a texture that is 1/8th the size of the terrain texture or smaller, blurring it, and then using it as the mask to blend the two heightmaps together (perhaps as part of the rendering algorithm itself)?
Note that you can also paint this "blending" texture by hand, allowing fine control of cliffs vs smooth transition areas (sharper edges vs blurry edges).
I thought it would be as simple as:
Vector3 point = Vector3.Transform(originalPoint, worldMatrix);
But apparently not... It makes the point's values shoot into the thousands.
Basically, I'm trying to create a collision system in which every pair of points forms a line, so essentially I want to collide lines. I want the lines to be able to scale, rotate, and translate based on a world matrix (so that the collision lines stay in tune with the object's scale, rotation, and translation).
I've been trying for hours now and I can't seem to figure it out. I've also tried multiplying by the view matrix, and while that is the closest to what I want, it seems to switch between two sets of numbers! It would be perfect if it stayed with one set; I have no idea why it keeps changing...
Any help, please? :(
Edit: To add a little, I'm constantly updating the points in an Update call. I don't know if that changes anything; either way, points = originalPoints first.
Steve H:
One line would have two points, so:
originalPoint[0] = new Vector3(-42.5f, 0f, 0f);
originalPoint[1] = new Vector3(42.5f, 0f, 0f);
point[0] = Vector3.Transform(originalPoint[0], worldMatrix);
point[1] = Vector3.Transform(originalPoint[1], worldMatrix);
At first, point[0] & [1] equal originalPoint[0] & [1]. But the moment I move my player even just a few pixels...
point[0] = (-5782.5f, 0f, 0f)
point[1] = (-5697.5f, 0f, 0f)
The player's position is -56.0f.
My worldMatrix goes as:
_world = Matrix.Identity // ISROT
* Matrix.CreateScale(_scale) // This object's scale
* Matrix.CreateFromQuaternion(_rotation) // Its rotation
* Matrix.CreateTranslation(_offset) // The offset from the centre
* Matrix.CreateFromQuaternion(_orbitRotation) // Its orbit around an object
* _orbitObjectWorld // The object to base this world from
* Matrix.CreateTranslation(_position); // This object's position
The objects display properly in graphics. They scale, rotate, translate completely fine. They follow the orbit's scale, rotation, and translation too but I haven't tested orbit much, yet.
I hope this is enough detail...
Edit: Upon further research, the original points are also being changed... :| I don't get why that's happening. They're exactly the same as the new points...
I figured out my problem... -_-
So, after I create the line points, I do this at the end:
originalLines = collisionLines;
collisionLines and originalLines are both Vector3[] arrays.
It turns out that assigning one array to the other only copies the reference, so both variables point at the exact same array, and changing one changes the other... that is something I did not know.
So I made this function:
void CreateOriginalPoints()
{
// Vector3 is a struct, so this copies values element by element,
// not references (Array.Copy would do the same job)
_originalPoints = new Vector3[_collisionPoints.Length];
for (int i = 0; i < _collisionPoints.Length; i++)
_originalPoints[i] = _collisionPoints[i];
}
And this solves the problem completely. It now makes complete sense to me why this problem was happening in the first place.
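The aliasing behaviour described above isn't specific to C#; any language where arrays are reference types behaves the same way. A small Python sketch of the same trap and the fix, with values borrowed from the earlier example purely for illustration:

```python
# Assigning an array/list copies the reference: both names point at the
# same underlying storage, exactly like originalLines = collisionLines.
collision_lines = [(-42.5, 0.0, 0.0), (42.5, 0.0, 0.0)]
original_lines = collision_lines          # alias, NOT a copy
collision_lines[0] = (-5782.5, 0.0, 0.0)
print(original_lines[0])                  # the "original" changed too

# An element-by-element copy (like CreateOriginalPoints) keeps them separate.
independent = list(collision_lines)
collision_lines[1] = (0.0, 0.0, 0.0)
print(independent[1])                     # still (42.5, 0.0, 0.0)
```

The fix is always the same: copy the elements, not the reference, before transforming the points in place.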
Thanks a lot, Donnie & Steve H. I know you two didn't answer my question directly, but you got me to poke around even deeper until I found the answer.