AI Car Unity wrong rotation - c#

This is my first post, so if I do something wrong, let me know. I'm making a 2D top-down car game in Unity 5.5. For the game, I need some cars to have a simple AI script, just to follow a path. So I searched the web for something that fits my needs and found (in my opinion) a really good implementation. Here is the project on GitHub, uploaded by the author of the implementation, I suppose: https://github.com/mindcandy/lu-racer
In a few words: we create a path with empty GameObjects and their respective colliders, we need a car (obviously) set up with a collider and a Rigidbody2D, and there are three scripts that "do the magic". For my project, I removed the lap-counting part because my game is not a race. Well, I set everything up and it all works fine, except for one thing: the car's rotation.
I think this happens because the tutorial game is also top-down but the cars go from left to right, while in my game the cars go from bottom to top. So the car sprites in my project are top-down and facing up, whereas the car sprites in the GitHub project are top-down and facing right (I wanted to upload the images, but it seems I can't because of my reputation, sorry).
The AI definitely works really well and the car follows the path without problems, but with the wrong rotation (see the aicar_rotation image).
I tried changing things related to the vectors and angles in the AICarMovement script, but without luck, so if anyone can take a look and give me a hand, I'll be grateful. If anyone wants more details to understand the problem, let me know. I tried to upload more things like pictures or GIFs to show the problem, but I can't because of my reputation.
This is the part of the code in AICarMovement that I think I have to change:
public class AICarMovement : MonoBehaviour {

    public float acceleration = 0.5f;
    public float braking = 0.3f;
    public float steering = 4.0f;

    private Rigidbody2D rigidb;
    Vector3 target;

    void Start() {
        rigidb = GetComponent<Rigidbody2D>();
    }

    public void OnNextTrigger(TrackLapTrigger next) {
        target = Vector3.Lerp(next.transform.position - next.transform.right, next.transform.position + next.transform.right, Random.value);
    }

    void steerTowardsTarget() {
        Vector2 towarNextTrigger = target - transform.position;
        float targetRot = Vector2.Angle(Vector2.right, towarNextTrigger);
        if (towarNextTrigger.y < 0.0f) {
            targetRot = -targetRot;
        }
        float rot = Mathf.MoveTowardsAngle(transform.localEulerAngles.z, targetRot, steering);
        transform.eulerAngles = new Vector3(0.0f, 0.0f, rot);
    }

    void FixedUpdate() {
        steerTowardsTarget();
        float velocity = rigidb.velocity.magnitude;
        velocity += acceleration;
        rigidb.velocity = transform.right * velocity;
        rigidb.angularVelocity = 0.0f;
    }
}
Sorry about my English; it is not my native language.

Rotate your sprite (and also the spawn GameObject) so that it faces right, the direction the script assumes. That way you need no fancy behaviour and everything looks correct.

In steerTowardsTarget(), 2nd line:
Try to change the first parameter of Angle() from Vector2.right to Vector2.up.
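For what it's worth, here is a rough sketch of how both methods might look for a sprite that faces up. Note that besides the Vector2.up change, the flipped sign test and the use of transform.up in FixedUpdate() are my own assumptions, since "forward" for an up-facing sprite is the local up axis rather than transform.right:
void steerTowardsTarget() {
    Vector2 towarNextTrigger = target - transform.position;
    // Measure the angle from "up" instead of "right"...
    float targetRot = Vector2.Angle(Vector2.up, towarNextTrigger);
    // ...and flip the sign when the target lies to the right (x > 0),
    // the analogue of the original y < 0 test.
    if (towarNextTrigger.x > 0.0f) {
        targetRot = -targetRot;
    }
    float rot = Mathf.MoveTowardsAngle(transform.localEulerAngles.z, targetRot, steering);
    transform.eulerAngles = new Vector3(0.0f, 0.0f, rot);
}

void FixedUpdate() {
    steerTowardsTarget();
    float velocity = rigidb.velocity.magnitude;
    velocity += acceleration;
    rigidb.velocity = transform.up * velocity; // "forward" is now the local up axis
    rigidb.angularVelocity = 0.0f;
}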

Related

When making the camera follow a ball, which one should control the camera position? The ball or the camera itself?

I am learning Unity3D and now creating a trivial (useless) game as follows.
The ball rolls down an inclined floor and the camera must follow the ball with the following relationship:
x camera = x ball
y camera = y ball + 3
z camera = z ball - 10
There are two possible ways to control the camera position.
The ball controls the camera
In this scenario, I attach the following script to the ball.
public class Ball : MonoBehaviour
{
    [SerializeField]
    private Transform cameraTransform;

    void Start() { }

    void Update()
    {
        Vector3 newCameraPos = new Vector3
        {
            x = transform.position.x,
            y = transform.position.y + 3f,
            z = transform.position.z - 10f
        };
        cameraTransform.position = newCameraPos;
    }
}
The camera controls itself
In this scenario, I attach the following script to the camera.
public class Camera : MonoBehaviour
{
    [SerializeField]
    private Transform ballTransform;

    void Start() { }

    void Update()
    {
        Vector3 newCameraPos = new Vector3
        {
            x = ballTransform.position.x,
            y = ballTransform.position.y + 3f,
            z = ballTransform.position.z - 10f
        };
        this.transform.position = newCameraPos;
    }
}
Question
Even though both methods work as expected, I am wondering whether there are any pros and cons for each method. Which one should I use?
As you've already mentioned, both examples work as expected.
What I like to do though, is assign functionality to the object that is responsible for performing the 'action'. In this case the camera is 'following' something. At the moment, it is following the ball, but later if you wanted to make it follow something else, would it make sense to have to navigate to your ball gameobject to change that behaviour? I think not.
By assigning functionality to objects based on 'responsibilities' you will often find that your code ends up being much more modular in the long run.
Of course this sort of practice is nothing new to game development, or software development at all. It complements the Single Responsibility Principle and shares many of its qualities.
But, at the end of the day, if you're working on your code alone, then you will know the codebase inside out. So it's up to you really!
I would also suggest creating Components based off of those responsibilities whenever possible. So instead of having one generic Camera component, I would create a FollowTarget component and attach that to the camera. In doing so, you will have enabled the ability to use that very same Component to make some other, arbitrary object follow another arbitrary object in your game.
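As an illustration only (the names FollowTarget, target and offset are mine, not something from your project), such a component might look roughly like this:
using UnityEngine;

// Hypothetical FollowTarget component: attach it to the camera (or anything else)
// and assign the transform it should follow plus an offset in the Inspector.
public class FollowTarget : MonoBehaviour
{
    [SerializeField] private Transform target;
    [SerializeField] private Vector3 offset = new Vector3(0f, 3f, -10f);

    // LateUpdate runs after the target has moved this frame,
    // so the follower never lags one step behind.
    void LateUpdate()
    {
        transform.position = target.position + offset;
    }
}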
Happy learning!

How to make camera relative movement

I'm learning Unity and C#, and I want my movement to be camera-relative instead of world-relative. How do I do that?
My Unity version is 2018.3.12f1. I would be happy for any help.
Just to let you know: instead of moving the camera, I'm rotating the player.
void Update()
{
    float AxisY = Player.transform.eulerAngles.y;

    /* Movement starts here */
    Vector3 Movement = new Vector3(Input.GetAxis("Horizontal"), 0, Input.GetAxis("Vertical"));
    if (Input.GetKey(KeyCode.LeftShift) || Input.GetKey(KeyCode.RightShift)) { // running code
        Player.transform.position += Movement * running_speed * Time.deltaTime;
    } else {
        Player.transform.position += Movement * speed * Time.deltaTime;
    }
    /* Movement ends here */

    /* Rotation controller starts here */
    Quaternion target = Quaternion.Euler(Player.transform.eulerAngles.x, Player.transform.eulerAngles.y, Player.transform.eulerAngles.z);
    /*if (Player.transform.eulerAngles.x != 0 || Player.transform.eulerAngles.z != 0 || Player.transform.eulerAngles.y != 0) {
        Player.transform.rotation = Quaternion.Euler(0,0,0);
    }*/
    if (Input.GetKey(KeyCode.E))
    {
        Debug.Log("E got pressed");
        //float AxisYPositive = Player.transform.eulerAngles.y;
        AxisY = AxisY + 1;
        Player.transform.rotation = Quaternion.Euler(0, AxisY, 0);
    }
    else if (Input.GetKey(KeyCode.Q))
    {
        Debug.Log("Q got pressed");
        //float AxisYNegetive = Player.transform.eulerAngles.y;
        AxisY = AxisY - 1;
        Player.transform.rotation = Quaternion.Euler(0, AxisY, 0);
    }
}
The player's movement is world relative, how to make the movement camera relative?
If you want to make the movements relative to the gameObject, call the method Transform.Rotate() on the transform of the gameObject you want to rotate rather than modifying its Quaternion directly. Just make sure the final argument is set to Space.Self.
if (Input.GetKey(KeyCode.E))
{
    Debug.Log("E got pressed");
    // Rotate by 1 degree per frame around the local Y axis.
    // Rotate takes Euler angles (not a Quaternion) and applies them relative
    // to the current rotation, so pass the delta rather than the accumulated AxisY.
    Player.transform.Rotate(new Vector3(0f, 1f, 0f), Space.Self);
}
In general you don't want to directly mess with an object's transform.rotation, at least not unless you somewhat understand quaternions (I don't!).
I can see a few issues with your code, but the common thread seems to be that you don't really understand how transforms work. Specifically, you might want to look into World/Local space.
The usual way to control a player goes roughly like this:
void DoMovement(Transform player)
{
    // If you move first your controls might feel 'drifty', especially at low FPS.
    Turn(player);
    Move(player);
}

void Turn(Transform player)
{
    float yaw = Input.GetAxis("Yaw") * Time.deltaTime; // Aka turn left/right
    player.Rotate(0, yaw, 0, Space.Self);
    // Space.Self is the default value, but I put it here for clarity.
    // That means the player will rotate relative to themselves,
    // ...instead of relative to the world axis, like in your code.
}
You didn't ask about movement, but as-is your character will always move relative to the world. The below should make it move relative to the camera.
Transform _cameraTransform; // Assumes this is set during Start()

void Move(Transform player)
{
    var forwardMove = _cameraTransform.forward; // Get whatever direction is 'forward' for the camera
    forwardMove.y = 0; // Don't want movement up and down.
    forwardMove = forwardMove.normalized; // Normalize sets the 'power' of the vector to 1.
    // If you set y to 0 and don't normalize, you'll go slower when the camera looks down
    // ...than when the camera is flat along the plane.
    player.position += forwardMove * Input.GetAxis("Vertical") * Time.deltaTime;
    // Here you could do the same for strafe/side-to-side movement.
    // It would be the same as above, but using transform.right and the Horizontal axis
    // (see the sketch below).
}
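For completeness, here is a minimal sketch of the strafe counterpart mentioned in the comment above, under the same assumption that _cameraTransform is set in Start() (the method name Strafe is mine):
void Strafe(Transform player)
{
    var rightMove = _cameraTransform.right; // The camera's sideways direction
    rightMove.y = 0;                        // Stay on the horizontal plane
    rightMove = rightMove.normalized;
    player.position += rightMove * Input.GetAxis("Horizontal") * Time.deltaTime;
}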
Now, I'm making some assumptions here since you haven't specified what kind of game it is and what kind of controls you want. I'm assuming you have a character running around on a mostly flat plane (no aircraft/spaceship controls), and that the camera is attached to the player. This might not actually be the case.
In any case I advise you to check out the official tutorials, especially the Roll-a-Ball tutorial, which I have found is good for beginners to get a grasp of basic player controls that are not just world-relative. The other tutorials, too, are pretty good if you think they're interesting.
Aside from the official Unity tutorials, there are a ton of decent-to-amazing tutorials out there, including video tutorials, so for something like this you could just search for <game type> tutorial and pick whatever seems good to you. While getting started I advise you to avoid the shortest videos, as you will likely benefit greatly from explanations that only fit in longer videos. Of course, that doesn't mean you should pick the longest videos either.
In case someone needs to move an object and doesn't care about colliders, you can use transform.Translate and pass your camera (or any Transform) as its relativeTo parameter, so the translation is automatically calculated relative to the object you pass in.
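A minimal sketch of that approach; the cameraTransform and moveSpeed fields are assumptions made purely for illustration:
[SerializeField] private Transform cameraTransform; // assumed reference to the camera
[SerializeField] private float moveSpeed = 5f;      // assumed movement speed

void Update()
{
    Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
    // The second argument makes Translate interpret the vector in the camera's
    // local space, so the movement becomes camera-relative.
    transform.Translate(input * moveSpeed * Time.deltaTime, cameraTransform);
}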

I have no syntax errors in Unity. How can I find errors so I can learn why my crosshair stays in the center of the screen?

For a Unity game: shouldn't the crosshair move along with the player rig to point at enemies? How can I find the error when there are no syntax error messages?
I tried visiting the Unity forums, found some support about raycasting, followed the directions, and added some code to the crosshair.cs script. I receive no syntax errors.
public class CrossHair : MonoBehaviour
{
    [SerializeField] private GameObject standardCross;
    [SerializeField] private GameObject redCross;

    float moveForce = 1.0f;
    float rotateTorque = 1.0f;
    float hoverHeight = 4.0f;
    float hoverForce = 5.0f;
    float hoverDamp = 0.5f;

    Rigidbody rb;
    private RaycastHit raycastHit;

    void Start()
    {
        standardCross.gameObject.SetActive(true);
        rb = GetComponent<Rigidbody>();
        // Fairly high drag makes the object easier to control.
        rb.drag = 0.5f;
        rb.angularDrag = 0.5f;
    }

    void Update()
    {
        // Push/turn the object based on arrow key input.
        rb.AddForce(Input.GetAxis("Vertical") * moveForce * transform.forward);
        rb.AddTorque(Input.GetAxis("Horizontal") * rotateTorque * Vector3.up);

        RaycastHit hit;
        Ray downRay = new Ray(transform.position, -Vector3.up);
        if (Physics.Raycast(downRay, out hit))
        {
            // The "error" in height is the difference between the desired height
            // and the height measured by the raycast distance.
            float hoverError = hoverHeight - hit.distance;

            // Only apply a lifting force if the object is too low (ie, let
            // gravity pull it downward if it is too high).
            if (hoverError > 0)
            {
                // Subtract the damping from the lifting force and apply it to
                // the rigidbody.
                float upwardSpeed = rb.velocity.y;
                float lift = hoverError * hoverForce - upwardSpeed * hoverDamp;
                rb.AddForce(lift * Vector3.up);
            }
        }

        Ray targettingRay = new Ray(transform.position, transform.forward);
        if (Physics.Raycast(targettingRay, out raycastHit, 100))
        {
            if (raycastHit.transform.tag == "Enemies")
            {
                Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                redCross.gameObject.SetActive(true);
                standardCross.gameObject.SetActive(false);
            }
        }
        else
        {
            redCross.gameObject.SetActive(false);
            standardCross.gameObject.SetActive(true);
        }
    }
}
I expect the crosshair in my game to follow the player rig camera as the code describes. Any guidance is appreciated.
Syntax errors are the kind of errors that prevent your game/program from running at all. They are like grammar mistakes that need to be fixed before you can run your code.
A syntax error is your PC telling you:
"I can't run this because I don't understand it" or "I can't run this because this is not allowed."
One example would be int x = "Hello World". This is not allowed because a string value cannot be assigned to an integer (like that).
But your code not having any syntax errors does not mean it will do what you intended it for, it just means your code will run.
A good and easy way of debugging your code in Unity is to add Debug.Log("My Log Message"); statements to your code where you think it would be beneficial. These are going to be logged to your console output in Unity while you are in play mode. You can for example do something like this to constantly get your cross's position and rotation logged:
Debug.Log("Cross Position: " + standardCross.transform.position);
Debug.Log("Cross Rotation: " + standardCross.transform.rotation);
Just be sure to remove them once you are done with them, because having Debug.Log calls in your code takes a significant toll on your performance.
Another, more sophisticated way of debugging your code is through the usage of breakpoints in Visual Studio or whatever IDE/Editor you are using.
It basically comes down to declaring points where your program should pause execution for you to look at values in your program itself.
It's quite handy but goes above and beyond what I could tell you via this text, so please have a look at this Unity-specific use case: Debugging Unity games with Visual Studio
Now to your code:
First: There is a Vector3.down so you don't have to use -Vector3.up.
Second: Do you need a GameObject as crosshair? Why not just add a UI-Crosshair instead?
That way it always stays in the middle of your screen wherever you turn your camera.
Just add a new Image to your UI via GameObject -> UI -> Image, give it some kind of crosshair look in the inspector and lock it to the middle of the screen by left-clicking on the little crosshair in the top left of the Inspector while you have your Image selected and then Shift + Alt + Left click the middle option.
If you really want to use a separate GameObject as a crosshair, then maybe attach it to your player object as a child. That way it will move and rotate with your player automatically, and you do not have to do this via script.
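If you prefer to do the parenting from code instead of in the Hierarchy, a one-liner like the following would work (crosshair and player are placeholder references, not names from your project):
// Parent the crosshair to the player. Passing true keeps the crosshair's
// current world position; pass false to keep its local values instead.
crosshair.transform.SetParent(player.transform, true);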
Hope this helps!

Calculating/Predicting a trajectory

I'm just starting with physics, so I'm not always sure about what I'm doing. It's a 2D project, but I'm using 3D physics objects like SphereCollider, etc.
What I have:
Objects floating in space and affecting each other through gravity:
protected virtual IEnumerator OnTriggerStay(Collider other) {
    yield return new WaitForFixedUpdate();
    if (other.attachedRigidbody) {
        Vector3 offsetVector = this.transform.position - other.transform.position;
        float distance = offsetVector.magnitude;
        float gravityForce = (other.rigidbody.mass * mass) / Mathf.Pow(distance, 2);
        // Clamp gravity.
        if (gravityForce > 1.0F) {
            gravityForce = 1.0F;
        }
        other.attachedRigidbody.constantForce.force = offsetVector.normalized * gravityForce;
    }
}
There are controllable objects on which the player can click and drag a line away from the object in order to give it a force (shoot) in the opposite direction.
What I want to achieve:
The player should see a rough prediction of the trajectory while aiming. That means the prediction needs to take into account the current velocity, the force that would be applied when the player releases the mouse button, and the gravity of the surrounding objects.
What I have tried so far:
For testing purposes I just save the computed/predicted positions in an array and draw those positions in OnDrawGizmos().
I wrote a method, computeGravityForPosition(Vector3 position), which returns the gravity influence at a given position.
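For reference, here is a minimal sketch of what such a method might look like, assuming the same force formula and clamp as in OnTriggerStay above; the attractors list is a hypothetical field holding the surrounding bodies:
// Requires using System.Collections.Generic; for List<T>.
[SerializeField] private List<Rigidbody> attractors; // hypothetical list of surrounding bodies

private Vector3 computeGravityForPosition(Vector3 position) {
    Vector3 totalForce = Vector3.zero;
    foreach (Rigidbody attractor in attractors) {
        Vector3 offsetVector = attractor.position - position;
        float distance = offsetVector.magnitude;
        float gravityForce = (attractor.mass * mass) / Mathf.Pow(distance, 2);
        gravityForce = Mathf.Min(gravityForce, 1.0f); // same clamp as above
        totalForce += offsetVector.normalized * gravityForce;
    }
    return totalForce;
}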
And this is how I try to calculate the positions:
private void drawWayPrediction() {
    Vector3 pos = this.transform.position;
    // The offsetVector for the shooting action.
    Vector3 forceVector = pos - Camera.main.ScreenToWorldPoint(Input.mousePosition);
    forceVector.z = 0.0F;
    // The predicted momentum scaled up to increase the strength.
    Vector3 force = (forceVector.normalized * forceVector.magnitude);
    // 1. I guess that this is wrong, but don't know how to do it properly.
    momentum = this.rigidbody.velocity + force;
    for (int i = 0; i < predictionPoints.Length; i++) {
        float t = i * Time.fixedDeltaTime;
        momentum += computeGravityForPosition(pos);
        pos += momentum * t * t;
        predictionPoints[i] = pos;
    }
}
At the beginning, when the objects are just slowly approaching each other, it looks okay. After the first shot, the prediction is completely wrong. I guess it is because of point 1 in the code; just adding the force to the velocity is probably horribly wrong.
Thank you very much for your time.
EDIT:
I removed seemingly unnecessary parts.
I still think that the main problem lies in point 1 in the code. I just don't know how to combine the current movement of the object (of which, as far as I understand Unity's physics engine, I only have the current velocity) with the newly created force:
Vector3 forceVector = pos - Camera.main.ScreenToWorldPoint(Input.mousePosition);
Vector3 force = (forceVector.normalized * forceVector.magnitude);
If you are using a newer version of Unity (probably 2018 or above), you can use the nice method
Physics.Simulate(dt); // delta time, dt, is the amount of time to simulate.
https://docs.unity3d.com/ScriptReference/Physics.Simulate.html
https://docs.unity3d.com/2018.3/Documentation/ScriptReference/PhysicsScene.Simulate.html
By using this function you can manually advance the simulation.
This method should be applied to a different physics scene.
Therefore I suggest that when you click, you simulate a few physics steps (the more you simulate, the more accurate an indication the player will get); with every step you store the position of the object, and when you are done simulating, you draw a line through all the points.
In my opinion, it should run quite fast if done correctly.
The code should look something like this:
public PhysicsScene physicsScene;
public GameObject actualBall;
public GameObject simulatedBall;
private List<Vector3> myPoints = new List<Vector3>();

void OnClick() {
    if (!physicsScene.IsValid())
        return; // do nothing if the physics scene is not valid.
    simulatedBall.transform.position = actualBall.transform.position;
    myPoints.Clear();
    for (int i = 0; i < 10; i++) {
        physicsScene.Simulate(Time.fixedDeltaTime);
        // store the position after each simulated step.
        myPoints.Add(simulatedBall.transform.position);
    }
    // draw a line through the stored points.
}
In addition, there is this video that I hope will help. Good luck:
https://www.youtube.com/watch?v=GLu1T5Y2SSc
I hope I answered your question, and if not, tell me :)
Disclaimer: unfortunately I suck at math, so I can't provide any code for the calculations.
Now that the legal stuff is out of the way :)
In my opinion you are looking at this all wrong. What you need is to calculate the curve (the path of the object's trajectory) and then simply plot the curve in OnDrawGizmos with a line renderer.
You don't need to simulate the behaviour of the object. Not only is this a LOT faster but it's also simpler in terms of TimeScale shenanigans. By changing the TimeScale you are also affecting the TimeScale of your trajectory simulation which will most likely look and feel weird.
By doing a basic trajectory calculation you will not have this issue.
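As a rough illustration of what calculating the curve could mean under a constant acceleration (the method name and the constant-acceleration simplification are my assumptions; with several moving attractors the gravity changes along the path, which effectively brings you back to a step-by-step simulation):
// Sample points of p(t) = p0 + v0 * t + 0.5 * a * t^2 for a constant acceleration a.
Vector3[] PredictTrajectory(Vector3 startPos, Vector3 startVelocity, Vector3 acceleration, int steps) {
    Vector3[] points = new Vector3[steps];
    for (int i = 0; i < steps; i++) {
        float t = i * Time.fixedDeltaTime;
        points[i] = startPos + startVelocity * t + 0.5f * acceleration * t * t;
    }
    return points;
}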
PS: This link might help.

Unity C# How to turn item in hands when player turns

I am pretty new to Unity, but I have a basic FPS game made. When holding a gun, I would like to make it so that when your player turns, the item in their hands rotates to show the turning. For example, when playing Call of Duty, the gun rotates when you rotate your character. This is the code I have, but it is not working:
void Update() {
    this.rotateEquppedOnTurn();
}

private void rotateEquppedOnTurn() {
    if (this.equippedItem != null) {
        InteractEquppableItem equip = this.equippedItem.gameObject.GetComponent<Interaction>() as InteractEquppableItem;
        if (equip.rotatesWhenTurn) {
            float rotX = Input.GetAxis("Mouse X");
            float rotY = Input.GetAxis("Mouse Y");
            Quaternion tempRot = new Quaternion();
            Quaternion tempCam = GameObject.Find("PlayerCamera").transform.rotation;
            tempRot.x = tempCam.x + rotX;
            tempRot.y = tempCam.y + rotY;
            tempRot.z = tempCam.z;
            this.equippedItem.gameObject.transform.rotation = tempRot;
        }
    }
}
When turning the character with this code, the gun just rotates in a weird way; it's not quite what I expected from the rotation script.
Quaternions are not vectors.
I suggest you start by watching the vector tutorial on Unity's web site.
The last bit of the tutorial goes over what cross products are and why you would use them - specifically, you can use them to obtain a relative axis around which you may want to rotate something.
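As a tiny illustration of that idea (cameraTransform is an assumed reference, not something from the question): the cross product of two directions gives an axis perpendicular to both, which you can then rotate around.
// The axis perpendicular to the camera's forward direction and world up is the
// camera's sideways axis; rotating around it pitches the item relative to the view.
Vector3 pitchAxis = Vector3.Cross(cameraTransform.forward, Vector3.up);
equippedItem.transform.Rotate(pitchAxis, Input.GetAxis("Mouse Y"), Space.World);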
Don't directly assign rotation like this.
this.equippedItem.gameObject.transform.rotation = tempRot;
Instead of that, use something like this:
this.equippedItem.gameObject.transform.Rotate(new Vector3(x, y, z));
You can derive x, y, z from the mouse motion, as in the sketch below.
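A minimal sketch of what that could look like; the sensitivity field is my own invention for tuning:
float sensitivity = 2.0f; // hypothetical tuning value

void Update() {
    float rotX = Input.GetAxis("Mouse X") * sensitivity;
    float rotY = Input.GetAxis("Mouse Y") * sensitivity;
    // Rotate by small per-frame deltas instead of assigning an absolute rotation.
    this.equippedItem.gameObject.transform.Rotate(new Vector3(-rotY, rotX, 0f));
}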
