Unity Camera.onPreRender & Camera.onPreCull - C#

So I've been researching how to make certain players (both offline and online, for consoles and PC) invisible to other players in Unity, for both 2D and 3D. I know that having a separate layer for each player and their camera isn't efficient or effective, so I was looking for something better. After days of research I finally found these:
Camera.onPreRender, Camera.onPreCull, and Making GameObjects dynamically invisible
But I'm still very confused.
Are public void MyPreRender(Camera cam) and public void MyPreCull(Camera cam) delegates or something, with the Enable/Disable just changing a value on the camera to exclude the GameObjects listed in the function? If so, shouldn't they be labeled as delegates to work? If not, how do these functions change which GameObjects should or shouldn't be culled/rendered?
Also, would this work well for what I'm doing, with little hit on performance and frame rate? The other person said it did, but does it really? Is there a better and faster way?

Yes, MyPreRender and MyPreCull are delegates (or rather, methods that match the signature of a delegate defined elsewhere). Specifically, they are event handler methods.
When the camera performs a render (or cull) pass, it first invokes all methods that have been subscribed to the onPreRender (or onPreCull) event (the += is what tells the other system about your handler method).
You can find out more about events from this Unity tutorial.
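For example, here is a minimal sketch of hiding an object from one specific camera (the class and field names are my own assumptions, not from your linked posts; note these static events belong to the built-in render pipeline):

using UnityEngine;

public class HideFromCamera : MonoBehaviour
{
    public Camera hiddenFrom; // the camera that should not see this object
    Renderer myRenderer;

    void Awake()
    {
        myRenderer = GetComponent<Renderer>();
    }

    void OnEnable()
    {
        // Subscribe the handler methods to the static camera events.
        Camera.onPreCull += MyPreCull;
        Camera.onPostRender += MyPostRender;
    }

    void OnDisable()
    {
        // Always unsubscribe, or the events will keep calling stale handlers.
        Camera.onPreCull -= MyPreCull;
        Camera.onPostRender -= MyPostRender;
    }

    void MyPreCull(Camera cam)
    {
        // Called once per camera, just before that camera culls the scene.
        if (cam == hiddenFrom)
            myRenderer.enabled = false;
    }

    void MyPostRender(Camera cam)
    {
        // Restore visibility so every other camera still renders the object.
        if (cam == hiddenFrom)
            myRenderer.enabled = true;
    }
}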

Related

Unity AR Foundation body tracking. Is it possible to track multiple bodies?

If you use the latest version of AR Foundation (I had to manually edit Packages/manifest.json to get it) and also have an iPhone with an A12 processor or later, you can have body tracking.
You can track 3D or 2D human bodies.
I am interested in the 2d tracking.
In Unity's demo they use this line of code to get the human body data:
var joints = m_HumanBodyManager.GetHumanBodyPose2DJoints(Allocator.Temp);
which seems to only return info for one body, even if the phone can see two or more.
I am wondering if tracking multiple bodies is possible; some functions of the ARHumanBodyManager class seem to hint at this being possible, e.g. this function:
OnTrackablesChanged(List<ARHumanBody> added, List<ARHumanBody> updated, List<ARHumanBody> removed)
added: the list of human bodies added to the set of trackables.
updated: the list of human bodies updated in the set of trackables.
removed: the list of human bodies removed from the set of trackables.
and this event
public event Action<ARHumanBodiesChangedEventArgs> humanBodiesChanged
The event that is fired when a change to the detected human bodies is reported.
That function is protected, and I have tried subscribing to that event, but it doesn't ever seem to be called. It's also very weird, as it's an event whose handlers are Actions that take ARHumanBodiesChangedEventArgs as a parameter... I dunno why anyone would ever write a thing like that, tbh.
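For reference, this is roughly how I subscribe (a minimal sketch of my setup; it assumes an ARHumanBodyManager component on the same GameObject):

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class BodyTrackingLogger : MonoBehaviour
{
    ARHumanBodyManager m_Manager;

    void OnEnable()
    {
        m_Manager = GetComponent<ARHumanBodyManager>();
        m_Manager.humanBodiesChanged += OnBodiesChanged;
    }

    void OnDisable()
    {
        m_Manager.humanBodiesChanged -= OnBodiesChanged;
    }

    void OnBodiesChanged(ARHumanBodiesChangedEventArgs args)
    {
        // If multiple bodies were tracked, each one should show up here separately.
        foreach (var body in args.added)
            Debug.Log("Body added: " + body.trackableId);
        foreach (var body in args.updated)
            Debug.Log("Body updated: " + body.trackableId);
    }
}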
Anyway, writing visual debug code and then building to iOS to test all these semi-documented classes is a massive pain. If someone could just let me know whether multiple body tracking is even possible with AR Foundation, that would help me a lot. Thanks!

How can I implement different actions (e.g. change scenery, interact with NPCs, pick up items) using the input system of Unity?

I am using the (new) input system of Unity and am having trouble implementing interaction with different objects. I am still quite new to Unity and hope to find some help.
I have created an event (OnInteract) that listens for the e key being pressed.
Next I created an object with a collider that my player can run into. Whenever the player touches a certain collider, something should happen; in my case I want to change the scene. However, in my initial scene there are two doors that the player can open by pressing e. I have given both doors the same layer name because they are both exits.
Basically, it works in that I can only press e when I am touching this particular collider. However, I don't know how to perform two different actions depending on which door it is. Maybe there is a way to give the objects a script that I can trigger from the PlayerMovement script, without putting everything into the player.
This is my script that works so far:
void OnInteract(InputValue value)
{
    if (myBoxCollider.IsTouchingLayers(LayerMask.GetMask("Entrance")))
    {
        Debug.Log("interact with the door");
    }
}
Or is there perhaps a way to listen for the "tag" instead of the layer?
I have also come across interfaces, but have not really understood how they would help me, or how I would have to use them here to make this work.
In the end you're going to have to check some kind of tag or layer, or reference a component; it's just your preference. To make it a little simpler, you can shoot a raycast in the direction you are looking to check for objects, instead of using colliders. But in the end it's doing the same thing.
I don't really use interfaces, as I'm pretty new myself, but from what I can tell they are generally used for organization and shared behaviour across similar objects, like doors lol.
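If you do want to try interfaces, a minimal sketch could look like this (all names here are my own assumptions; each door gets its target scene set in the Inspector):

using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.SceneManagement;

// Each interactable object implements this and carries its own behaviour.
public interface IInteractable
{
    void Interact();
}

public class Door : MonoBehaviour, IInteractable
{
    [SerializeField] string sceneToLoad; // assumption: set per door in the Inspector

    public void Interact()
    {
        SceneManager.LoadScene(sceneToLoad);
    }
}

public class PlayerInteraction : MonoBehaviour
{
    IInteractable current; // the interactable we are currently overlapping, if any

    void OnTriggerEnter2D(Collider2D other)
    {
        // Remember the interactable we just walked into.
        var interactable = other.GetComponent<IInteractable>();
        if (interactable != null) current = interactable;
    }

    void OnTriggerExit2D(Collider2D other)
    {
        if (ReferenceEquals(other.GetComponent<IInteractable>(), current)) current = null;
    }

    void OnInteract(InputValue value)
    {
        // The player no longer cares what it is touching; the object decides.
        current?.Interact();
    }
}

This way the player script never needs to know whether the thing in front of it is a door, an NPC or an item.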

Why not finish all the functions within the Update() function in Unity

void Update() runs once per frame. So why not put code such as void OnTriggerEnter(){} inside the Update() function? I know there's some misunderstanding here, but I just can't explain it to myself. Besides, are void Update() and void Start(){} the only functions that run in Unity? Built-in functions like void OnTriggerEnter also work. And what about functions written by us, like public void SwitchAvatar()? Can they run if they are not called inside void Update(){}? I know the questions above may sound stupid, but somehow I can't tell the answers. All of your help is greatly appreciated. Thanks!
Alright, let's open Pandora's box of magic methods in Unity.
First, there are two types of classes that you can inherit from to access magic methods: MonoBehaviour and ScriptableObject. Both of them offer different things, and the latter is mainly used to serialize data outside of scenes. Also, ScriptableObject has far fewer magic methods compared to MonoBehaviour.
Second: A MonoBehaviour has a lifecycle. Where you are in this lifecycle determines what methods are called by the engine.
The following graphic shows you the whole lifecycle of a MonoBehaviour:
(Source: Unity - Manual: Order of Execution for Event Functions)
As you can see, the object gets instantiated and Awake is called. Before the first time Update is called, the engine calls the Start method. There is a difference between Awake and Start: Awake is comparable to a constructor. It is called without anything external being guaranteed to exist (like other components on your GameObject). When Start is called, all the other components on the object are initialized and can be accessed by a GetComponent call.
Now to Update, FixedUpdate and all the other events:
Unity has two separate cycles it iterates over. One for physics and one for everything else. Because calculating physics is expensive and needs precision, it is called in fixed, distinct time steps. You can actually set them in the project settings in the "Time" tab.
Now if you want to modify anything related to physics (like the velocity of a rigidbody), you should do that in FixedUpdate, because it runs in the same time step as the physics engine (PhysX or Box2D, depending on your usage of colliders).
Update, on the other hand, runs as often as possible. The time that has passed between two Update calls can be read from Time.deltaTime. Note that Time.fixedDeltaTime is always the same, as it is the fixed time step between two physics updates.
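As a minimal sketch (just logging; the Rigidbody is an assumption), the difference looks like this:

using UnityEngine;

public class LifecycleDemo : MonoBehaviour
{
    Rigidbody rb;

    void Awake()
    {
        // Called first, like a constructor; other components may not be ready yet.
        Debug.Log("Awake");
    }

    void Start()
    {
        // Called once before the first Update; safe to grab other components here.
        rb = GetComponent<Rigidbody>();
        Debug.Log("Start");
    }

    void Update()
    {
        // Called once per rendered frame; the frame time varies.
        Debug.Log("Update, deltaTime = " + Time.deltaTime);
    }

    void FixedUpdate()
    {
        // Called on the fixed physics timestep; modify rigidbodies here.
        if (rb != null)
            rb.AddForce(Vector3.up);
        Debug.Log("FixedUpdate, fixedDeltaTime = " + Time.fixedDeltaTime);
    }
}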
The other event methods are called in response to either the editor-internal update loop, the rendering loop, or the physics loop. Whenever an object collides during the physics calculation, OnCollisionEnter is called. Note that this cannot come out of Update because, as we know, Update is not used to calculate physics.
Okay, a tl;dr:
Start and Update are not the only methods in the MonoBehaviour lifecycle; there are plenty, and each has its purpose. Physics is calculated on a different timescale than the one Update is called on, and thus cannot be part of Update.
And one thing to take away: you should really read the manual of the game engine you are using. There is plenty of stuff you should know when writing code for a real-time application, and if you build onto an existing game engine, you should check its docs regularly. The Unity documentation especially is a very good read, for both code and editor usage.

In Unity, how does Unity magically call all "Interfaces"?

Unity has an "interface":
IPointerDownHandler (doco)
You simply implement OnPointerDown ...
public class Whoa : MonoBehaviour, IPointerDownHandler
{
    public void OnPointerDown(PointerEventData data)
    {
        Debug.Log("whoa!");
    }
}
and Unity will "magically" call the OnPointerDown in any such MonoBehavior.
You do NOT have to register them, set an event, nor do anything else.
All you do syntactically is add "IPointerDownHandler" and "public void OnPointerDown" to a class, and you can get those messages magically.
(If you're not a Unity dev - it even works if you suddenly add one in the Editor while the game is running!)
How the hell do they do that, and how can I do it?
So, I want to do this:
public interface IGetNews
{
    void SomeNews(string s);
}
and then I can add SomeNews to any MonoBehaviour.
The alternate solutions are obvious, I want to know specifically how Unity achieve that "magic" behavior.
(BTW: I feel they should not have called these "interfaces", since it's basically nothing at all like an interface - it's sort of the opposite! You could say they magically made a way to inherit from more than one abstract class, I guess.)
Aside:
if you've not used Unity before, the conventional way to do this - since we don't have access to Unity's magic - is just to add a UnityEvent to your daemon which will be sending the message in question:
public class BlahDaemon : MonoBehaviour
{
    public UnityEvent onBlah;
    ...
    onBlah.Invoke();
Say you have classes Aaa, Bbb, Ccc which want to get the message. Simply connect the Unity event (either by dragging in the editor or in code), for example:
public class Aaa : MonoBehaviour
{
    void Awake()
    {
        BlahDaemon b = Object.FindObjectOfType<BlahDaemon>();
        b.onBlah.AddListener(OnBlah);
    }

    public void OnBlah()
    {
        Debug.Log("almost as good as Unity's");
    }
}
You're basically "registering" your call in Awake, you are indeed piggybacking on the magic Unity use - whatever it is. But I want to use the magic directly.
When it comes to XXXUpdate, OnCollisionXXX and the other MonoBehaviour magic methods, the way Unity registers them is not reflection, as has been widely believed, but some internal compilation process.
HOW UPDATE IS CALLED
No, Unity doesn’t use System.Reflection to find a magic method every time it needs to call one.
Instead, the first time a MonoBehaviour of a given type is accessed, the underlying script is inspected through the scripting runtime (either Mono or IL2CPP) to see whether it has any magic methods defined, and this information is cached. If a MonoBehaviour has a specific method, it is added to the proper list; for example, if a script has an Update method defined, it is added to the list of scripts which need to be updated every frame.
During the game Unity just iterates through these lists and executes the methods from them; it's that simple. Also, this is why it doesn't matter whether your Update method is public or private.
http://blogs.unity3d.com/2015/12/23/1k-update-calls/
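You can reproduce that list-iteration idea in user code. A minimal sketch (the interface and manager names here are my own, not Unity's):

using System.Collections.Generic;
using UnityEngine;

// Objects register once; a single manager then iterates the cached list
// every frame, instead of anything being looked up repeatedly.
public interface IManagedUpdate
{
    void ManagedUpdate();
}

public class UpdateManager : MonoBehaviour
{
    static readonly List<IManagedUpdate> targets = new List<IManagedUpdate>();

    public static void Register(IManagedUpdate t) { targets.Add(t); }
    public static void Unregister(IManagedUpdate t) { targets.Remove(t); }

    void Update()
    {
        // One real Update call drives every registered object.
        for (int i = 0; i < targets.Count; i++)
            targets[i].ManagedUpdate();
    }
}

A behaviour would then call UpdateManager.Register(this) in OnEnable and UpdateManager.Unregister(this) in OnDisable instead of defining its own Update.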
In the case of an interface, I would assume Unity does a bit more, since the interface is required; otherwise you would just add the method like any other MonoBehaviour method.
My assumption (which could be wrong) is that it uses a basic GetComponents on the GameObject, then iterates the resulting array and calls the method, which is guaranteed to be implemented since it comes from the interface.
You could reproduce the pattern with:
NewsData data;
if (GetNews(out data))
{
    IGetNews[] getNews = data.gameObject.GetComponents<IGetNews>();
    foreach (IGetNews ign in getNews) { ign.SomeNews("some news"); }
}
GetNews is a method that checks whether some news should be sent to the object. You can think of it like Physics.Raycast, which assigns values to a RaycastHit: here it fills the data reference if that object is meant to receive news for any valid reason.
You can use reflection to get all types in an assembly that implement a specific interface, and then instantiate those types and call the methods on those instances through the interface. (Note that Activator.CreateInstance creates plain new instances, so this won't work for MonoBehaviours, which have to live on a GameObject.)
var types = this.GetType().Assembly.GetTypes()
    .Where(t => t.GetInterfaces().Contains(typeof(IGetNews)));
foreach (var type in types)
{
    var instance = (IGetNews) Activator.CreateInstance(type);
    instance.SomeNews("news");
}
The UI-related built-in interfaces like IPointerDownHandler, IDragHandler etc. are called by the EventSystem class/script (this is attached to the EventSystem GameObject that is created automatically when you create a UI/Canvas object) and only work on UI elements. For testing: if you turn off or delete the EventSystem GameObject from the scene, or even disable the EventSystem script, these interfaces will not be called and all UI elements will stop working (from a functionality point of view, meaning your registered functions will not be called).
So these interface methods don't get called magically on their own; they are called via the EventSystem script.
Read about the Event System: CLICK HERE
There are 3 main components that you need to remember for interaction with the UI elements in Unity:
GraphicRaycaster: attached to the Canvas object itself. It is responsible for sending raycasts to the UI elements of that canvas and determines whether any of them have been hit. If you remove it from the canvas, no interaction (click, scroll etc.) can happen with the UI elements of that canvas, and these interfaces will not be called either. LINK FOR MORE
InputSystemUIInputModule: attached to the EventSystem GameObject. It is responsible for telling the canvases in the whole Unity scene what to consider as input for the UI, e.g. what a mouse left-click on the UI means as input to UI elements. It also calls the interface methods like OnPointerDown, OnDragStarted etc. Read more: LINK
EventSystem: responsible for processing and handling UI events in a whole Unity scene. It doesn't work independently; it requires BaseInputModules to work properly, and it also maintains elements' status and user interactions. For details: LINK
Just for understanding, consider it as a story: the EventSystem uses the InputSystemUIInputModule to get input from your mouse, keyboard or touch. On the basis of these inputs, the EventSystem does a raycast to determine whether you have interacted with any element, and saves a reference to that element (the pointed-at elements are stored in EventSystem.current). If so, it calls the built-in functions like hover, select, mouse down/up and drag cancelled on that element, based on the interaction lifecycle, via the InputSystemUIInputModule.
Now, if you want to call any IPointerDownHandler method, maybe they do something like this internally when an element of the UI is clicked:
// assumption for understanding: the interface is looked up as a component;
// if some component on the pointed-at object implements it, it is returned
// and the registered interface method can be called
IPointerDownHandler ipd = EventSystem.current.currentSelectedGameObject.GetComponent<IPointerDownHandler>();
if (ipd != null) ipd.OnPointerDown(eventData);
Or see the code below, copied from the Unity UI package, where you can see more accurately how this execution works:
// Invoke OnPointerDown, if present.
var newPressed = ExecuteEvents.ExecuteHierarchy(currentOverGo, eventData, ExecuteEvents.pointerDownHandler);
if (newPressed == null)
    newPressed = ExecuteEvents.GetEventHandler<IPointerClickHandler>(currentOverGo); // copied from the Unity UI package
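Incidentally, you can piggyback on that same ExecuteEvents machinery with your own interface, as long as it extends IEventSystemHandler. A minimal sketch (the interface name is borrowed from the question; the broadcaster class is my own invention):

using UnityEngine;
using UnityEngine.EventSystems;

// A custom interface becomes usable with ExecuteEvents once it
// extends IEventSystemHandler.
public interface IGetNews : IEventSystemHandler
{
    void SomeNews(string s);
}

public class NewsBroadcaster : MonoBehaviour
{
    public void Send(GameObject target, string headline)
    {
        // Calls SomeNews on every component of target that implements IGetNews.
        ExecuteEvents.Execute<IGetNews>(
            target,
            null, // no BaseEventData is needed for a custom event
            (handler, data) => handler.SomeNews(headline));
    }
}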

XNA AI: Managing Enemies on Screen

I have two classes, Human and Monster.
Both have a property called MoveBehavior.
Human has HumanMoveBehavior, and Monster has MonsterMoveBehavior
I want the HumanMoveBehavior to move AWAY from Monsters, and MonsterMoveBehavior to move TOWARD Humans.
The problem I'm having is where should I put my code to move?
In the Human/Monster class?
Using this approach, I had a Move() method which takes a list of all entities in the game, decides whether each is a Monster or Human using a method called GetListOfOpponents(List allsprites), and then runs GetNearestOpponent(List opponents);
But this looks really messy.
Should I have a SpriteController that decides where the Sprites move? I'm unsure where I need to put this code :(
Thanks!
You could think of an AIManager that just says:
foreach (GameObject go in m_myObjects) // m_myObjects is a list of all objects that require updating
{
    go.Update(); // standard GameObject function
}
After that, each class should take care of its own piece of code, so the updating happens in the class itself.
So Human says:
// just a class which is a GameObject and also has moving behaviour
// do the same with Monster
public class Human : GameObject, IMoveBehaviour
{
    public override void Update()
    {
        GoMove();
    }

    public void GoMove()
    {
        // human-specific logic here
    }
}

// This interface describes that some movement
// will happen with the implementing class
public interface IMoveBehaviour
{
    void GoMove();
}
By using an interface, you make the specific logic part of the class itself, and you have no need to ALSO create some class that will handle it for you. Of course that is possible, but in real life the human/monster is the one that is moving, not some object he is carrying.
UPDATE
Answer to the comment: because there is an AIManager (or even a complete GameObjectManager, which would be nice to maintain all GameObjects), you could ask the AIManager for the places where you cannot go.
Because pathfinding is most of the time done with some navigation mesh or a specified grid, the GameObjectManager can return the specific grid with all navigable points on it. You should certainly not store everyone's position in every monster: most of the time, the monster does not know exactly where everyone is (as in real life). So knowing where not to go is good, but knowing where everyone is would give your AI too much of an advantage.
So think of returning a grid with the points where to go and where not to, instead of maintaining such things inside the monster/human. Always check where you should leave what by thinking about how it would work in real life.
The way Valve handled this for entities in Half-Life 2 is one of the better ways, I think. Instead of giving each AI its own separate Move methods and calling those, it simply called the Think() method and let the entity decide what it needed to do.
I'd go with what Marnix says and implement an AIManager that loops through each active AI in the game world, calling the Think() method of each. I would not recommend interfacing your Human class with an "IMoveBehavior"; it would be better to abstract that into a "WorldEntity" abstract class.
You might have invisible entities that control things like autosaves, triggers, lighting, etc., but some entities will have a position in the world. Those are the ones with a vector identifying their position. Have the AI's Think() method call its own move() method, but keep it private: the only one who needs to think about moving is the AI itself.
If you want to encourage the AI to move outside of the Think() method, I would suggest some kind of imperative, such as a Goal-Oriented Action Planning (GOAP) system. Jeff Orkin wrote about this fantastic concept, and it was used in games such as F.E.A.R. and Fallout 3. It might be a bit overkill for your application, but I thought it was interesting.
http://web.media.mit.edu/~jorkin/goap.html
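A minimal sketch of that Think()-driven setup in XNA-style C# (all names here are my own assumptions):

using System.Collections.Generic;
using Microsoft.Xna.Framework;

// Every entity decides for itself inside Think(); the manager only drives the loop.
public abstract class WorldEntity
{
    public Vector2 Position; // only world-placed entities really use this

    public abstract void Think(GameTime gameTime);
}

public class Monster : WorldEntity
{
    public override void Think(GameTime gameTime)
    {
        // Monster-specific logic: find the nearest human and move toward it.
    }
}

public class AIManager
{
    readonly List<WorldEntity> entities = new List<WorldEntity>();

    public void Add(WorldEntity entity) { entities.Add(entity); }

    public void Update(GameTime gameTime)
    {
        foreach (WorldEntity entity in entities)
            entity.Think(gameTime);
    }
}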
