I want many GameObjects on the scene to have the Y position "animated" programmatically. I have a class, let's call it "TheGameObject", which doesn't inherit from MonoBehaviour and as such can't have the Update() function that I need to achieve the Y movement of the GameObjects.
I decided to try using a delegate but then a problem came up: I can pass only one Transform to the delegate.
Here's the code for the delegate in a static class, let's call it "staticClass", that derives from MonoBehaviour:
public delegate void UpdatingEventHandler(Transform t);
public static event UpdatingEventHandler Updating;
void Update() {
    if (Updating != null)
        Updating(/*Transform that should be passed*/);
}
And this is the code in the "TheGameObject" class:
public GameObject gameObject { get; set; }
private void spawn() {
    GameObject go = new GameObject();
    staticClass.Updating += animateYpos(go.transform);
}

void animateYpos(Transform t) {
    //modify t.position.y
}
Is there a way to pass each respective Transform to the delegate, in order to call Updating() in the static class's Update() and have all the GameObjects move accordingly?
The problem isn't the type of the parameter that is passed but which Transform is passed so that each different Transform will have its own Y position modified.
This is sort of totally wrong, Cress!
It's very common for experienced developers to not really understand that Unity is not object oriented, and has utterly no connection - at all - to concepts like inheritance.
(Sure, the programming language currently used for writing components in Unity happens to be OO, but that's largely irrelevant.)
Good news though, the solution is incredibly simple.
All you do in Unity is write behaviors that do what you want.
Essay about it ... https://stackoverflow.com/a/37243035/294884
Try to get this concept: say you have a "robot attack device" in your game.
So, the only thing in Unity scenes is GameObjects. (There is nothing else - at all.)
Your "robot attack device" would have these behaviors. ("Behaviours" meaning components attached to it. Componantes are MonoBehavior.)
"robot attack device"
• animate Z position
• shoot every ten seconds
• respond to bashes with sound effect
and so on.
Whereas, your "kawaii flower" might have these behaviors
"kawaii flower fixed NPC"
• shoot every 5 seconds
• respond to bashes with sound effect
• rainbow animation
In this case you're asking how to write an "animate Y" behavior.
It's very easy. First note that everything - everything - you do in Unity you do with an extension, so it will be something like:
// extension methods must live in a top-level static class (name it whatever you like)
public static class TransformExtensions
{
    public static void ForceY(this Transform tt, float goY)
    {
        Vector3 p = tt.position;
        p.y = goY;
        tt.position = p;
    }
}
Quick tutorial on extensions: https://stackoverflow.com/a/35629303/294884
And then regarding the trivial component (all components are trivial) to animate Y, it's just something like
public class AlwaysMoveOnYTowards : MonoBehaviour
{
    [System.NonSerialized] public float targetY;
    public float unitsPerSecond = 2f; // assumed speed, purely illustrative

    void Update()
    {
        // step a little closer to targetY each frame (lerp, or whatever, towards targetY)
        float nextY = Mathf.MoveTowards(transform.position.y, targetY, unitsPerSecond * Time.deltaTime);
        transform.ForceY(nextY);
    }
}
Note that, of course, you just turn that component on and off as needed. So if the thing is asleep or whatever in your game
thatThing.GetComponent<AlwaysMoveOnYTowards>().enabled = false;
and later true when you want that behavior again.
Recall, all you're doing is making a model for each thing in your game (LaraCroft, dinosaur, kawaiiFlower, bullet .. whatever).
(Recall there are two ways to use models in Unity, either use Unity's prefab system (which some like, some don't) or just have the model sitting offscreen and then Instantiate or use as needs be.)
{Note that a recent confusion in Unity (to choose one) is: until some years ago you had to write your own pooling for things that you had a few of. Those days are long gone, never do pooling now. Just Instantiate. A huge problem with Unity is totally out-of-date example code sitting around.}
BTW here's the same sort of ForceY in the case of the UI system
public static void ForceYness(this RectTransform rt, float newY)
{
    Vector2 ap = rt.anchoredPosition;
    ap.y = newY;
    rt.anchoredPosition = ap;
}
Finally, if you just want to animate something on Y to somewhere one time, you should get into the amazing
Tweeng
which is the crack cocaine of game engineering:
https://stackoverflow.com/a/37228628/294884
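(Just to sketch the idea, with a hedge: a tween here is essentially a coroutine that eases a value over a duration. The class name, easing choice and MonoBehaviour wrapper below are mine, not from that answer.)
using System.Collections;
using UnityEngine;

public class MoveYOnce : MonoBehaviour
{
    // call StartCoroutine(TweenY(5f, 0.5f)) to slide this object's Y to 5 over half a second
    public IEnumerator TweenY(float targetY, float duration)
    {
        float startY = transform.position.y;
        for (float t = 0f; t < 1f; t += Time.deltaTime / duration)
        {
            float eased = Mathf.SmoothStep(0f, 1f, t);             // simple ease in/out
            transform.ForceY(Mathf.Lerp(startY, targetY, eased));  // uses the ForceY extension above
            yield return null;                                      // wait one frame
        }
        transform.ForceY(targetY); // land exactly on the target
    }
}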
A huge problem with Unity is folks often don't realize how simple it is to make games. (There are many classic examples of this: you get 1000s of questions asking how to make a (totally trivial) timer in Unity, where the OP has tied themselves in knots using coroutines. Of course, you simply use Invoke or InvokeRepeating 99% of the time for timers in Unity. Timers are one of the most basic parts of a game engine: of course, obviously, Unity threw in a trivial way to do it.)
Here's a "folder" (actually just a pointless empty game object, with these models sitting under it) holding some models in a scene for a game.
Those are some of the enemies in the game. As an engineer I just popped them in there; the game "designer" would come along and set the qualities (so, the potato is fast, the turnip is actually an unkillable boss, the flying head connects to google maps or whatever).
Same deal, this is from the "folder" (aside: if you use an empty game object as a, well, folder to just hold stuff, it's often referred to as a "folder" - it's no less a game object) with some "peeps" ("player projectiles"). Again these are the very models of these things to be used in the game. So obviously these would just be sitting offscreen, as you do with games. Look, here they are, literally just sitting around at -10 or something outside of the camera frustum (a good moment to remember that a Unity "camera" is nothing more than ............. a GameObject with certain components (which Unity already wrote for our convenience) attached.)
(Again: in many cases you may prefer to use prefabs. Unity offers both approaches: prefabs, or "just sitting offscreen". Personally I suggest there is a lot to be said for "just sitting offscreen" at first, as it makes you engineer proper states and so on; you'll inherently have to have a "just waiting" state and so on, which is highly important. I strongly encourage you, at first, to just 'sit your models around offscreen', don't use prefabs at first. Anyway that's another issue.)
Here then are indeed some of those peeps ...
Thank God, I was born an engineer, not a wanker "game designer", so someone else comes along and sets that sort of thing as they see fit. (And of course, indeed adds the profoundly important (from a player point of view) Components such as, you know, "the renderer", sound effects, and the like.)
Note, Cress: you may notice above that, because "life's like that" in the naming there (it's purely a naming issue, not a deep one), we very unsensibly went against just what I describe here. So, notice I have a component that should be named, say, "Killableness" or perhaps just "projectileness" or indeed just "power" or "speed". Just because life's like that, I rather unsensibly named it "missile" or "projectile" rather than indeed "flight" or "attackPower". You can see this is very, very bad, because when I reuse the component "attackPower" (stupidly named here "missile") in the next project, which involves say robotic diggers or something rather than missiles, everyone will scream at me "Dude, why is our attack power component, attached to our robotic spiders, called 'missile' instead of 'attack power', WTF?" I'm sure you see what I mean.
Note too, there's a great example there of how, while Unity and GameObject have no connection at all to computer science or programming, it's totally normal - if confusing - that in writing components you have to be a master of OO, since (as it happens) the current language used by Unity does indeed happen to be an OO language (but do bear in mind they could change to Lisp or something at any time, and it wouldn't fundamentally affect Unity). As an experienced programmer you will instantly see that (for the components discussed here) I have something like a base class "Flightyness" and then subclasses like "ParabolaLikeFlightyness", "StraightlineFlightyness", "BirdFlightyness", "NonColliderBasedFlightyness", "CrowdNetworkDrivenFlightyness", "AIFlightyness" and so on; you can see each has the "general" settings (for Steve the game designer to adjust) and more specific settings (again for Steve to adjust). Some random code fragments that will make perfect sense to you...
public class Enemy : BaseFrite
{
    public tk2dSpriteAnimator animMain;
    public string usualAnimName;

    [System.NonSerialized] public Enemies boss;

    [Header("For this particular enemy class...")]
    public float typeSpeedFactor;
    public int typeStrength;
    public int value;
}

public class Missile : Projectile
{
    [Header("For the missile in particular...")]
    public float splashMeasuredInEnemyHeight;
    public int damageSplashMode;
}
Note the use of [Header(...)] - one of the most important things in Unity! :)
Note how well the "Header" thing works, especially when you chain down through derived classes. It's nothing but pleasure working on a big project where everyone works like that, making super-tidy, super-clear Inspector panels for your models sitting offscreen. It's a case of "Unity got it really right". Wait until you get to the things they fucked up! :O
Please consider what Joe said, but for passing multiple parameters to a delegate you can use this, iirc:
public delegate void DoStuffToTransform(params Transform[] transform);
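A hedged sketch of how that params-based delegate could be wired up (the class, field and handler names here are purely illustrative, not from the original code):
using UnityEngine;

public delegate void DoStuffToTransform(params Transform[] transforms);

public class UpdaterExample : MonoBehaviour
{
    public static event DoStuffToTransform Updating;

    public Transform first, second; // assigned in the Inspector, purely for the example

    void Update()
    {
        if (Updating != null)
            Updating(first, second); // params lets you pass any number of transforms in one call
    }

    void OnEnable()  { Updating += HandleUpdating; }
    void OnDisable() { Updating -= HandleUpdating; }

    void HandleUpdating(params Transform[] transforms)
    {
        foreach (Transform t in transforms)
        {
            Vector3 p = t.position;
            p.y += Time.deltaTime; // nudge each one upward, just as an example
            t.position = p;
        }
    }
}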
Related
I am facing some issues working with the "new" Unity Input System. I am working on an FPS prototype, where the player can pick up different guns with different windup times. I do not want to keep track of how long the player has held the shoot key, as the Hold interaction can do that by itself.
Looking around the Unity documentation, MS IntelliSense suggestions, and Stack Overflow, I have not seen a single instance of something along the lines of "override interactions of an InputAction". It is simply something not being talked about. Should I take that as a red flag that what I am doing is wrong? Probably.
I can see the interactions field for an InputAction, but it is marked with the readonly keyword. There is no internal setter for it either. Otherwise, the solution would be trivial, though unusual to the Unity™ way.
float duration = 0.04f;
controls.Main.Shoot.interactions = $"Hold(duration={duration})";
// Error: "Interactions" is readonly
Is there really not a way to edit the interactions for an InputAction during runtime with code?
I am well along in learning Unity basics but would like to nail down my understanding of the relation between components and the objects that own them. The tutorials I've been watching typically use or pass the Transform component when working with objects obtained in code. For example:
void Explode()
{
    Collider[] colliders = Physics.OverlapSphere(transform.position, explosionRadius);
    foreach (Collider collider in colliders)
    {
        if (collider.tag == "Enemy")
        {
            Damage(collider.transform);
        }
    }
}
which calls "Damage" with the transform on the colliders it finds:
void Damage(Transform enemy)
{
    Enemy e = enemy.GetComponent<Enemy>();
    if (e != null)
    {
        e.TakeDamage(damage);
    }
}
Looking at this, it is clear that it is pulling a Component found on all game objects and then using GetComponent to pull another component instance by type, since the Enemy component isn't going to have its own method. Why not just pass collider.gameObject though? I tried this (after changing Damage to expect a GameObject) and it worked fine.
This seems more intuitive to me but all of the tutorials I've seen use the transform component instead. Does anyone have any insight into why this is so? It would help me deepen my understanding of how Unity structures its code.
I'm not sure what tutorials you are following, but the ones I followed when I first started with Unity used GameObject instead of Transform.
I agree that it is more intuitive to use GameObject. Transform is a part, or Component, of a GameObject, but since it's mandatory it will always be there, hence the convenience property .transform. But since a GameObject is technically the holder of all its Components, it would be most logical, architecture-wise, to pass that as a parameter instead.
At the end of the day, it makes little difference in your examples, since you can call a lot of the same methods on both Transform and GameObject.
TLDR:
Use whatever you feel makes most sense.
Short answer
GameObject is more intuitive as you say.
Long answer
I think (my opinion) it's best to pass the most specific and relevant component.
For example, assume you have a function for a character to pick up a crate:
public void PickUp(Transform crate)
{
    crate.SetParent(this.transform);
    crate.localPosition = Vector3.zero; // or whatever your attachment point is
}
Here it makes sense in my mind that the Transform is passed, because it's the most specific component that will be needed. By passing GameObject here you are only delaying the inevitable GetComponent<Transform>() or go.transform.
On the other hand, if you have a function to hide a crate, then passing the game object would be the bare minimum you need to achieve this:
public void Hide(GameObject crate)
{
    crate.SetActive(false);
}
Passing anything else just delays the inevitable x.gameObject
In the explosion example I don't think that I would pass either to be honest. I would probably do this:
void Explode()
{
    Collider[] colliders = Physics.OverlapSphere(transform.position, explosionRadius);

    // needs "using System.Linq;" - OfType filters out the nulls from GetComponent
    var enemies = colliders
        .Select(x => x.GetComponent<Enemy>())
        .OfType<Enemy>(); // Or IHealth, IDamageable, ITarget, etc. if you have other things that can take damage.

    foreach (var enemy in enemies) // If empty, nothing will happen by default
    {
        enemy.TakeDamage(damage);
    }
}
With this approach you can see that there is no need to check tags or nulls. The enemies enumerable is guaranteed to contain either enemies or nothing at all.
By always passing gameObjects/transforms you will always have to worry about what it is that you are really receiving at the destination component. You will also open yourself to situations where you are not sure anymore where certain changes to your gameObjects are being made, because it can be anything in the system making those changes. Your ColorChangerComponent could actually also be moving the object around, destroying some other components, etc. By giving the component a Renderer, it more naturally limits the component to changes on the Renderer only (although you could obviously violate this limitation unless you perform actions against appropriate interfaces).
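A tiny sketch of that idea (the class and method names here are just illustrative): a component that only receives a Renderer can really only touch rendering state.
using UnityEngine;

public class ColorChangerComponent : MonoBehaviour
{
    // limited by its parameter type: it can tweak the material,
    // but it has no handle for moving or destroying the object
    public void ApplyTint(Renderer target, Color tint)
    {
        target.material.color = tint; // assumes the material exposes a main color
    }
}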
The only time it really makes sense to pass a generic component is when you are broadcasting this 'event' to a bunch of possible receivers that will each do a different thing with the object. I.e. you can't be certain at compile time what the gameObject will be used for, or you know that it will be used for many different things.
The fact that Unity itself returns Collider[] for the collision check sort of supports this line of thinking, otherwise it would've just returned GameObject[]. The same goes for OnTriggerEnter(Collider collider), OnCollisionEnter(Collision collision), etc.
I am fairly new to code, I know some of the basics but my knowledge is limited, so please let me know WHY in your answer if it's not too hard to explain, I'd like to learn rather than just be given the answer!
This code is the basic controls for a game i am making. I'll be explaining the premise of the game just so you're able to better grasp what my intent is.
The game will focus on the completion of mazes, however movement is restricted to only being able to go forward, and right. You may never do a u-turn, go left or go backwards.
With my current code, it is possible (VERY EASILY POSSIBLE) to just translate over the entire maze and the colliders for walls don't apply due to it being a translate, which to my understanding is essentially 'teleporting' it.
I've looked around on similar topics and discussions but I was unable to find any answers that addressed this kind of issue with colliders.
When the object collides with a 'wall' in my maze, I want the object to be reset to its original position, or at the very least die. I'm not sure if that will affect the answer given, but just in case keep that in mind, thank you!
using UnityEngine;
using System.Collections;

public class Movement_Script : MonoBehaviour {

    public float playerspeed = 1;

    void Update () {
        if (Input.GetKeyDown("up"))
        {
            transform.Translate(Vector3.up * 1);
        }
        if (Input.GetKeyDown("right"))
        {
            transform.Translate(Vector3.right * 1);
            transform.Rotate(Time.deltaTime, 0, -90);
        }
    }
}
When you control the transform, you are telling "Place this there".
So when you do:
transform.Translate(Vector3.up * 1); // multiplying by one is useless
you tell it to displace the object by (0, 1, 0) from its current position. This is regardless of any environment. You can still detect collisions, but they won't be resolved by the engine - you would have to do that yourself.
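For example, a minimal sketch of "doing it yourself", assuming the walls have colliders marked as triggers, the player has a Rigidbody so the events fire, and a "Wall" tag exists (all of that is my assumption, not part of the question):
using UnityEngine;

public class ResetOnWallHit : MonoBehaviour
{
    Vector3 startPosition;

    void Start()
    {
        startPosition = transform.position; // remember where the run began
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Wall"))
            transform.position = startPosition; // send the player back to the start
    }
}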
Unity came up with a ready-to-use solution: the CharacterController component. It does the same as you do in your code, but also performs a bunch of collision checks and resolves them. The latest version actually uses a rigidbody for a more physical approach.
http://docs.unity3d.com/Manual/class-CharacterController.html
http://docs.unity3d.com/ScriptReference/CharacterController.Move.html
The Move method is the one to use if you want Unity to handle the whole process for you. I would recommend setting up a basic four-wall room with a few boxes here and there so that you get the hang of it. Then it will be easy to move on and do whatever you want with it, whether 2D or 3D movement.
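Roughly, a hedged sketch of the same stepped controls going through CharacterController.Move instead of transform.Translate (the class name, key codes and step size below are assumptions for illustration):
using UnityEngine;

[RequireComponent(typeof(CharacterController))]
public class SteppedMovement : MonoBehaviour
{
    public float stepSize = 1f;
    CharacterController controller;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.UpArrow))
            controller.Move(transform.forward * stepSize);  // stops at walls instead of passing through

        if (Input.GetKeyDown(KeyCode.RightArrow))
            controller.Move(transform.right * stepSize);
    }
}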
I have a question about an XNA game I'm making, but it is also a generic question for future games. I'm making a Pong game and I don't know exactly what to update where, so I'll explain better what I mean. I have a Game, a Paddle and a Ball class and, for example, I want to verify the collisions between the ball and the screen limits or the paddles, but I come across 2 approaches to do this:
Higher Level Approach - Make paddle and ball properties public and check for collisions in Game.Update?
Lower Level Approach - Supply every piece of info I need (the screen limits and paddle info) to the ball class (by parameter, or in a common public static class) and check for collisions in Ball.Update?
I guess my question in a more generic way is:
Does an object need to know how to update and draw itself, even having dependencies from higher levels that somehow are supplied to them?
or
Is better to process it at higher levels in Game.Update or Game.Draw or using Managers to simplify code?
I think this is a game logic model question that applies to every game. I don't know if I made my question clear, if not, feel free to ask.
The difficult part of answering your question is that you're asking both: "what should I do now, for Pong" and "what should I do later, on some generic game".
To make Pong you don't even need Ball and Paddle classes, because they're basically just positions. Just stick something like this in your Game class:
Vector2 ballPosition, ballVelocity;
float leftPaddlePosition, rightPaddlePosition;
Then just update and draw them in whatever order suits you in your Game's Update and Draw functions. Easy!
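For instance, a minimal sketch of what that Update could look like with just those fields (the screen height and bounce logic are assumptions for illustration):
// inside your Game class, alongside those fields
protected override void Update(GameTime gameTime)
{
    float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

    ballPosition += ballVelocity * elapsed;

    // bounce off the top and bottom edges of an assumed 480-pixel-high screen
    if (ballPosition.Y < 0 || ballPosition.Y > 480)
        ballVelocity.Y = -ballVelocity.Y;

    // paddle movement and ball-vs-paddle checks would go here too

    base.Update(gameTime);
}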
But, say you want to create multiple balls, and balls have many properties (position, velocity, rotation, colour, etc): You might want to make a Ball class or struct that you can instance (same goes for the paddles). You could even move some functions into that class where they are self-contained (a Draw function is a good example).
But keep the design concept the same - all of the object-to-object interaction handling (ie: the gameplay) happens in your Game class.
This is all just fine if you have two or three different gameplay elements (or classes).
However let's postulate a more complicated game. Let's take the basic pong game, add some pinball elements like multi-ball and player-controlled flippers. Let's add some elements from Snake, say we have an AI-controlled "snake" as well as some pickup objects that either the balls or the snake can hit. And for good measure let's say the paddles can also shoot lasers like in Space Invaders and the laser bolts do different things depending on what they hit.
Golly that is a huge mess of interaction! How are we going to cope with it? We can't put it all in Game!
Simple! We make an interface (or an abstract class or a virtual class) that each "thing" (or "actor") in our game world will derive from. Here is an example:
interface IActor
{
    void LoadContent(ContentManager content);
    void UnloadContent();

    void Think(float seconds);
    void UpdatePhysics(float seconds);
    void Draw(SpriteBatch spriteBatch);

    void Touched(IActor by);

    Vector2 Position { get; }
    Rectangle BoundingBox { get; }
}
(This is only an example. There is no "one true actor interface" that will work for every game; you will need to design your own. This is why I don't like DrawableGameComponent.)
Having a common interface allows Game to just talk about Actors - instead of needing to know about every single type in your game. It is just left to do the things common to every type - collision detection, drawing, updating, loading, unloading, etc.
Once you're in the actor, you can start worrying about specific types of actor. For example, this might be a method in Paddle:
void Touched(IActor by)
{
    if (by is Ball)
        ((Ball)by).BounceOff(this.BoundingBox);

    if (by is Snake)
        ((Snake)by).Kill();
}
Now, I like to have the Paddle bounce the Ball, but it is really a matter of taste. You could do it the other way around.
In the end you should be able to stick all your actors in a big list that you can simply iterate through in Game.
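A rough sketch of what that loop could look like inside Game (the naive pairwise touch test is just for illustration, not a recommendation for large actor counts):
// inside your Game class; needs "using System.Collections.Generic;"
List<IActor> actors = new List<IActor>();

protected override void Update(GameTime gameTime)
{
    float seconds = (float)gameTime.ElapsedGameTime.TotalSeconds;

    foreach (IActor actor in actors)
        actor.Think(seconds);

    foreach (IActor actor in actors)
        actor.UpdatePhysics(seconds);

    // naive O(n^2) overlap pass using the interface's bounding boxes
    for (int i = 0; i < actors.Count; i++)
        for (int j = i + 1; j < actors.Count; j++)
            if (actors[i].BoundingBox.Intersects(actors[j].BoundingBox))
            {
                actors[i].Touched(actors[j]);
                actors[j].Touched(actors[i]);
            }

    base.Update(gameTime);
}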
In practice you might end up having multiple lists of actors of different types for performance or code simplicity reasons. This is ok - but in general try to stick to the principle of Game only knowing about generic actors.
Actors also may want to query what other actors exist for various reasons. So give each actor a reference to Game, and make the list of actors public on Game (there's no need to be super-strict about public/private when you're writing gameplay code and it's your own internal code.)
Now, you could even go a step further and have multiple interfaces. For example: one for rendering, one for scripting and AI, one for physics, etc. Then have multiple implementations that can be composed into objects.
This is described in detail in this article. And I've got a simple example in this answer. This is an appropriate next step if you start finding that your single actor interface is starting to turn into more of a "tree" of abstract classes.
You could also opt to start thinking about how different components of the game need to talk to each other.
Ball and Paddle are both objects in the game - in this case, renderable, movable objects.
The Paddle has the following criteria
It can only move up and down
The paddle is fixed to one side of the screen, or to the bottom
The Paddle might be controlled by the user (1 vs Computer or 1 vs 1)
The paddle can be rendered
The paddle can only be moved to the bottom or the top of the screen; it cannot pass its boundaries
The ball has the following criteria
It cannot leave the boundaries of the screen
It can be rendered
Depending on where it gets hit on the paddle, you can control it indirectly (Some simple physics)
If it goes behind the paddle the round is finished
When the game is started, the ball is generally attached to the Paddle of the person who lost.
Identifying the common criteria you can extract an interface
public interface IRenderableGameObject
{
    Vector3 Position { get; set; }
    Color Color { get; set; }
    float Speed { get; set; }
    float Angle { get; set; }
}
You also have some GamePhysics
public interface IPhysics
{
    bool HasHitBoundaries(Window window, Ball ball);
    bool HasHit(Paddle paddle, Ball ball);
    float CalculateNewAngle(Paddle paddleThatWasHit, Ball ball);
}
Then there is some game logic
public interface IGameLogic
{
    bool HasLostRound(...);
    bool HasLostGame(...);
}
This is not all the logic, but it should give you an idea of what to look for, because you are building a set of Libraries and functions that you can use to determine what is going to happen and what can happen and how you need to act when those things happen.
Also, looking at this you can refine and refactor this so that it's a better design.
Know your domain and write your ideas down.
Failing to plan is planning to fail
You could see your ball and paddle as components of your game, and XNA gives you the base class GameComponent, which has an Update(GameTime gameTime) method you may override to do the logic. Additionally, there is also the DrawableGameComponent class, which comes with its own Draw method to override. Every GameComponent also has a Game property which holds the game object that created it. There you may add some Services that your component can use to obtain information by itself.
Which approach you want to take - either have a "master" object that handles every interaction, or provide the information to the components and have them react themselves - is entirely up to you. The latter method is preferred in larger projects. Also, that would be the object-oriented way to handle things: give every entity its own Update and Draw methods.
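For example, a hedged sketch of a ball written as a DrawableGameComponent (the fields, starting values and "ball" texture name are assumptions); you would register it with Components.Add(new Ball(this)) in your Game constructor:
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Ball : DrawableGameComponent
{
    Vector2 position = new Vector2(100, 100);
    Vector2 velocity = new Vector2(60, 60);
    Texture2D texture;
    SpriteBatch spriteBatch;

    public Ball(Game game) : base(game) { }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        texture = Game.Content.Load<Texture2D>("ball"); // assumes a "ball" texture asset exists
    }

    public override void Update(GameTime gameTime)
    {
        position += velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
        base.Update(gameTime);
    }

    public override void Draw(GameTime gameTime)
    {
        spriteBatch.Begin();
        spriteBatch.Draw(texture, position, Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
}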
I agree with what Andrew said. I am just learning XNA as well, and in my classes - for example, your ball class - I'd have an Update(gameTime) method and a Draw() method at the least, and usually an Initialize() and Load() as well. Then from the main game class I call those methods from their respective counterparts. This was before I learned about GameComponent. Here is a good article about whether you should use that: http://www.nuclex.org/blog/gamedev/100-to-gamecomponent-or-not-to-gamecomponent
I have two classes, Human and Monster.
both have a Property called MoveBehavior
Human has HumanMoveBehavior, and Monster has MonsterMoveBehavior
I want the HumanMoveBehavior to move AWAY from Monsters, and MonsterMoveBehavior to move TOWARD Humans.
The problem I'm having is where should I put my code to move?
In the Human/Monster class?
Using this approach, I had a Move() Method, which takes a List of all entities in game, decides whether it's a Monster or Human using a method called GetListOfOpponents(List allsprites) and then runs GetNearestOpponent(List opponents);
But this looks really messy.
Should I have a SpriteController that decides where the Sprites move? I'm unsure where I need to put this code :(
Thanks!
You could think of an AIManager that just says:
foreach (GameObject go in m_myObjects) // m_myObjects is a list of all objects that require updating
{
    go.Update(); // standard GameObject function
}
After that, each class should take care of its own piece of code. So updating works in the class itself.
So Human says:
// just a class which is a gameObject and also has moving behaviour
// do the same with monster
public class Human : GameObject, IMoveBehaviour
{
    public override void Update()
    {
        GoMove();
    }

    public void GoMove()
    {
        // human specific logic here
    }
}

// This interface describes that some movement
// will happen with the implementing class
public interface IMoveBehaviour
{
    void GoMove();
}
By using an interface, you make the specific logic part of the class itself, and you don't need to ALSO create some class that will handle that for you. Of course that is possible. But in real life, the human/monster is the one that is moving, not some object he is carrying.
UPDATE
Answer to the comment: because there is an AIManager (or even a complete GameObjectManager, which would be nice to maintain all GameObjects), you could ask the AIManager for the places where you cannot go.
Because pathfinding is most of the time done with some navigation mesh or a specified grid, the GameObjectManager can return that grid with all navigable points on it. You should certainly not define all positions in every monster, because most of the time the monster does not know exactly where everyone is (as in real life). So knowing where not to go is indeed good, but knowing where everyone is would give your AI too much of an advantage.
So think of returning a grid with the points where to go and where not to go, instead of maintaining such things inside the monster/human. Always check where you should leave what, by thinking about how it would be in real life.
The way Valve handled this for entities in Half-Life 2 is one of the better ways, I think. Instead of giving each AI its own separate Move methods and calling those, it simply called the Think() method and let the entity decide what it needed to do.
I'd go with what Marnix says and implement an AIManager that loops through each active AI in the game world, calling the Think() method of each. I would not recommend interfacing your Human class with an "IMoveBehaviour", simply because it would be better to abstract that into a "WorldEntity" abstract class.
You might have invisible entities that control things like autosaves, triggers, lighting, etc, but some will have a position in the world. These are the ones who will have a vector identifying their position. Have the AI's Think() method call its own move() method, but keep it private. The only one who needs to think about moving is the AI itself.
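A loose sketch of that shape (the class names, Vector2 position and empty method bodies here are illustrative only):
public abstract class WorldEntity
{
    public Vector2 Position { get; protected set; }

    // each entity decides for itself what to do this frame
    public abstract void Think(float deltaTime);
}

public class Monster : WorldEntity
{
    public override void Think(float deltaTime)
    {
        // decide whether to chase, idle, attack, etc.
        Move(deltaTime);
    }

    // movement stays an internal detail of the entity
    void Move(float deltaTime)
    {
        // monster-specific steering toward the nearest human would go here
    }
}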
If you want to encourage the AI to move outside of the Think() method, I would suggest some kind of imperative, such as a Goal-Oriented Action Planning (GOAP) system. Jeff Orkin wrote about this fantastic concept, and it was used in games such as F.E.A.R. and Fallout 3. It might be a bit overkill for your application, but I thought it was interesting.
http://web.media.mit.edu/~jorkin/goap.html