Unity aim down sights with multiple scopes - C#

So I want to make an Aim Down Sight script where the gun moves to the center of the camera. Currently, I'm using Vector3.Lerp to lerp between a "normal vector" and an "aiming vector" which I set up via the editor. Now I have the problem that I have many attachments for multiple guns in my game, so the point which has to be in the center of the screen changes with each scope attached to the gun, and I really don't want to set up a new Vector3 for each combination of gun and scope. I tried many things, like calculating the distance between the screen center and the middle of the scope and then creating a new Vector3 from this distance via script, but nothing worked as expected. Do you guys have any ideas or suggestions on how to get this working?
I'm really thankful for every reply!

The easiest way to do it is to have an array of positions for the camera and keep track of which scope is currently attached.
public Vector3[] scopePositions;
public int scopeID;
Later, when you change the scope, just change the scopeID and when aiming down the sights, use it like this:
Camera.main.transform.position = Vector3.Lerp(normalPosition, scopePositions[scopeID], time);
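Put together, the whole thing might look something like this. Note that the weapon transform, the aimSpeed field, and applying the offset to the gun's local position instead of the camera are my assumptions, so treat it as a sketch rather than a definitive implementation:
using UnityEngine;

// Sketch of solution 1: one aim position per scope, indexed by scopeID.
public class AimDownSights : MonoBehaviour
{
    public Transform weapon;          // the gun model, child of the camera (assumed setup)
    public Vector3 normalPosition;    // hip-fire local position
    public Vector3[] scopePositions;  // one aim local position per scope
    public int scopeID;               // index of the currently attached scope
    public float aimSpeed = 10f;

    void Update()
    {
        // Hold right mouse button to aim; ease toward the current scope's position.
        Vector3 target = Input.GetMouseButton(1) ? scopePositions[scopeID] : normalPosition;
        weapon.localPosition = Vector3.Lerp(weapon.localPosition, target, aimSpeed * Time.deltaTime);
    }
}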
Solution 2
You could set up your scope structure like this:
Scope model > Empty game object with some dummy script.
When you have it like that, you can easily set the position in code:
scopePosition = GetComponentInChildren<DummyComponent>().transform.position - Camera.main.transform.position;
or something like that.
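As a rough sketch of that idea (the AimPoint marker name and the assumption that the weapon is a child of the camera are mine; in a real project each class would go in its own file):
using UnityEngine;

// Marker placed on an empty child of each scope model, at the point that
// should sit in the center of the screen when aiming (the "dummy script").
public class AimPoint : MonoBehaviour { }

// Computes the weapon's aim position from wherever the AimPoint marker sits,
// so no per-scope vectors need to be configured by hand.
public class AutoAim : MonoBehaviour
{
    public Transform weapon;  // assumed to be a child of the camera
    public Camera cam;

    public Vector3 ComputeAimPosition()
    {
        Transform aimPoint = weapon.GetComponentInChildren<AimPoint>().transform;
        // Where the scope's center currently sits, in camera-local space.
        Vector3 local = cam.transform.InverseTransformPoint(aimPoint.position);
        // Shift the weapon so that point lands on the camera's forward axis.
        return weapon.localPosition - new Vector3(local.x, local.y, 0f);
    }
}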

Related

Smooth between vector3s

I'm looking for a function similar to SmoothDampAngle; however, that one only smooths a single rotational axis. I want a way to change a Vector3 so that it moves between two points within a set amount of time. Let's say our starting point is the variable cam and our target point is the variable camOffset.
Thanks
Edit: If you know how to make Cinemachine smoothly transition to a camera offset extension, that is essentially what I want to do.
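For what it's worth, Vector3.SmoothDamp is the Vector3 counterpart of Mathf.SmoothDampAngle; a minimal sketch using the cam and camOffset names from the question:
using UnityEngine;

// Smooths a Vector3 from its current value toward a target over roughly smoothTime seconds.
public class SmoothFollow : MonoBehaviour
{
    public Vector3 cam;              // current point (names from the question)
    public Vector3 camOffset;        // target point
    public float smoothTime = 0.5f;  // approximate time to reach the target

    private Vector3 velocity;        // state SmoothDamp keeps between frames

    void Update()
    {
        cam = Vector3.SmoothDamp(cam, camOffset, ref velocity, smoothTime);
    }
}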

Distance between game object and ground in Unity

I recently started having a look at game development with Unity and was trying to make a simple 2D character with basic movement abilities. This character is supposed to jump and move from side to side, but only if it is standing on something.
Now my question is: How do you check if a player is standing on something? / Get the distance to the next game object / collider beneath the player game object?
Would greatly appreciate any helpful answers, and especially explanations of how exactly it works. Thanks!
To do this, you need to cast a ray to detect the point of impact on the ground and then read off the distance. The code below casts a ray from the center of your object straight down, with a maximum length of 3, and logs the distance.
public LayerMask groundLayer;
public float maxRayLength = 3f;
void Update()
{
    // Cast straight down from the object's center, only against the ground layer.
    var hit = Physics2D.Raycast(transform.position, Vector2.down, maxRayLength, groundLayer.value);
    if (hit) Debug.Log(hit.distance); // prints the current distance from the pivot
}
If you want to measure the ray from the character's feet instead, there are two options. The first is to offset the ray's origin downward by half the character's height:
Physics2D.Raycast(transform.position - transform.up * (height / 2f), Vector2.down, maxRayLength, groundLayer.value)
The second is to use an empty object at the base of the character and cast from that instead of the center:
public Transform pivot;
Then:
Physics2D.Raycast(pivot.position, Vector2.down, maxRayLength, groundLayer.value)
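Putting the pivot variant together as a self-contained script:
using UnityEngine;

// Measures the distance from an empty "feet" object straight down to the
// nearest ground collider.
public class GroundDistance : MonoBehaviour
{
    public Transform pivot;          // empty child placed at the character's feet
    public LayerMask groundLayer;
    public float maxRayLength = 3f;

    void Update()
    {
        var hit = Physics2D.Raycast(pivot.position, Vector2.down, maxRayLength, groundLayer.value);
        if (hit) Debug.Log(hit.distance); // distance from the feet to the ground
    }
}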
There are a few ways of actually doing this.
The most usual, although a bit complicated, way of doing it for a beginner is using Raycasts. A Raycast is basically a small invisible line that starts and ends where you tell it to. If anything with a specific tag or layer crosses its path, you can basically pull that object from your code. Raycasts are used for a lot of things, most notably in shooter games to shoot, or in games like Skyrim to pick up objects and interact with them.
Another way to do this, which is a bit more popular in 2D games, is to create a "feet" GameObject and make it a child of the player in the hierarchy. You can add a box collider to that GameObject and check "Is Trigger". You can add a tag to your ground objects, and through the OnTriggerEnter2D() and OnTriggerExit2D() methods you can basically tell when your character is floating in the air and when it is on the ground.
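A rough sketch of that feet-trigger idea (the "Ground" tag is an assumption; also note the player needs a Rigidbody2D for trigger events to fire):
using UnityEngine;

// Small trigger collider parented to the player; flips a flag while it
// overlaps anything tagged "Ground".
public class FeetSensor : MonoBehaviour
{
    public bool isGrounded;

    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.CompareTag("Ground")) isGrounded = true;
    }

    void OnTriggerExit2D(Collider2D other)
    {
        if (other.CompareTag("Ground")) isGrounded = false;
    }
}
If the feet can overlap two ground colliders at once, counting enters and exits is more robust than a single bool.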
Another popular method is to use the Physics2D.OverlapBox() method (Physics.OverlapBox() in 3D), which is pretty much the same as the trigger method, except that you create an invisible box (much like a raycast) and, instead of only being notified when something enters or exits, you actively check whether the invisible box is overlapping another object/tag/collider (which could be your ground).
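A sketch of the overlap approach in 2D (the box size and feet object are assumptions to tune for your character):
using UnityEngine;

// Tests an invisible box just below the feet against the ground layer.
public class GroundCheck : MonoBehaviour
{
    public LayerMask groundLayer;
    public Vector2 boxSize = new Vector2(0.5f, 0.1f);
    public Transform feet;   // empty object at the base of the character

    public bool IsGrounded()
    {
        // Returns the first ground collider overlapping the box, or null.
        return Physics2D.OverlapBox(feet.position, boxSize, 0f, groundLayer) != null;
    }
}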
There are also a few different things you can do with a Nav Mesh (mostly in 3D) but I think that for now these 3 should suffice!

Unity 3d Animator Apply root motion prevent overrides transform.position assignment

Hey all, I have a situation where I have a networked client that I move with position updates, setting the client's position directly:
transform.position = StateSelector.Instance.GetCharacter(_player.PlayerId).State.Position;
I started using an Animator for players with the "Apply Root Motion" flag turned on (some animations I use do require movement).
This overrides the transform.position assignment I mentioned above.
Do you have any suggestions on how can I assign the position without it being overwritten?
So far I've managed to use CharacterController.Move() with the deltaPosition, but I think there should be a better way than doing this...
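For reference, the pattern described there is usually written as an OnAnimatorMove callback: once a script on the Animator's GameObject defines it, Unity stops applying root motion automatically and hands the deltas to you. A minimal sketch (class and field names are mine):
using UnityEngine;

[RequireComponent(typeof(Animator), typeof(CharacterController))]
public class RootMotionRelay : MonoBehaviour
{
    Animator animator;
    CharacterController controller;

    void Awake()
    {
        animator = GetComponent<Animator>();
        controller = GetComponent<CharacterController>();
    }

    void OnAnimatorMove()
    {
        // Feed the root motion delta through the controller instead of letting
        // the Animator write transform.position directly, so network position
        // updates assigned elsewhere are no longer overwritten.
        controller.Move(animator.deltaPosition);
        transform.rotation *= animator.deltaRotation;
    }
}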
Thank you all for reading this.

Is it possible to snap a prefab perpendicular to a surface?

I'm trying to build a scene with a number of prefabs placed onto a tiny planet, think something like this. The problem I am facing is that while I can place prefabs on a sphere easily using Control+Shift, they are not rotated and thus appear at terrible rotations. I'm aiming for them to be placed perpendicularly.
Currently I am aware of three solutions:
Place the objects in the scene using Control+Shift then manually rotate them into position.
Place the objects in the scene like before, then add a snippet of code to each of their Update methods: transform.rotation = Quaternion.FromToRotation(transform.up, transform.position - origin) * transform.rotation;
Like option 2, run the code, then find some way to save the resulting world state back into the scene; but that is easier said than done, and it seems like too much effort for something that should be trivial.
The first is tedious and hard to correctly align, the second is easy but renders your scene builder unrepresentative of your final game, and I have no idea where to start on the third. Is there a better way?
In the end I decided to perform all of the orientation manually using Control+Shift. As Johan suggested, you could write your own [ExecuteInEditMode] editor scripts, and if the planet I was foresting had been significantly bigger, I would have looked into it.
Be aware, if you do venture down the scripting path, to check that the object isn't floating above or clipping through the surface. Manual adjustment may be necessary after all.
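If you do want to go the scripting route, a minimal sketch of the [ExecuteInEditMode] idea (the planet field and class name are mine; it just reuses the FromToRotation snippet from option 2):
using UnityEngine;

// With [ExecuteInEditMode], the object keeps itself oriented away from the
// planet's center while you place it in the Scene view, and the resulting
// rotation is saved with the scene.
[ExecuteInEditMode]
public class AlignToPlanet : MonoBehaviour
{
    public Transform planet;  // assign the planet's transform

    void Update()
    {
        if (planet == null) return;
        Vector3 up = transform.position - planet.position;
        transform.rotation = Quaternion.FromToRotation(transform.up, up) * transform.rotation;
    }
}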

How do I make so when user moves the camera it doesn't go beyond the scene borders?

How do I make it so that when I move the camera (with touch) it doesn't go beyond the scene borders?
How do I move the camera with touch so it moves strictly between scene parts, like slides (one swipe - first slide, another swipe - another slide), without going beyond the borders of the scene?
The game I'm making has a camera like in Disco Zoo game for android (I'm a newbie)
Technically, the scene doesn't really have a border. You'll need to define the border somehow within your game, then constrain the camera's position with something like Mathf.Clamp(value, min, max) in an Update() function on the camera.
How can you define the border? It's up to you. Some ideas:
Hard-code the values in the script that clamps the camera. Probably the quickest option, but not flexible
Make public parameters on the camera script that let you set min and max positions in the X and Y directions
If you have a background image: use the extents of that to define your camera's extents
Create empty objects in your scene that define the minimum and maximum extents of the scene. Put your "min" object at the bottom-left and the "max" object at the top-right. Connect them to the camera script, then use those positions to see if you've gone too far in any direction (see the sketch after this list). The main reason to do this is that it's visual.
(Slower, but dynamic) If everything in your scene uses physics, you could search the entire scene for every Collider component, then find the furthest extents in each direction. However, this is probably going to be pretty slow (so you'll only want to do it once), it'll take a while to code, and you'll probably want to tweak the boundaries by hand anyway.
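As a sketch of the marker-object idea above (field names are mine; LateUpdate is used so the clamp runs after any touch movement that frame):
using UnityEngine;

// Two empty objects define the allowed extents; the camera is clamped
// between them every frame.
public class CameraBounds : MonoBehaviour
{
    public Transform min;   // empty object at the bottom-left extent
    public Transform max;   // empty object at the top-right extent

    void LateUpdate()       // after any touch/swipe movement has run this frame
    {
        Vector3 p = transform.position;
        p.x = Mathf.Clamp(p.x, min.position.x, max.position.x);
        p.y = Mathf.Clamp(p.y, min.position.y, max.position.y);
        transform.position = p;
    }
}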
