I am facing some issues working with the "new" Unity Input System. I am working on an FPS prototype where the player can pick up different guns with different windup times. I do not want to keep track of how long the player has held the shoot key myself, as the Hold interaction can do that by itself.
Looking around the Unity documentation, MS IntelliSense suggestions, and Stack Overflow, I have not seen a single instance of something along the lines of "overriding the interactions of an InputAction". It is simply not being talked about. Should I take that as a red flag that what I am doing is wrong? Probably.
I can see the interactions field for an InputAction, but it is read-only; there is no internal setter for it either. Otherwise, the solution would be trivial, if unusual for the Unity™ way of doing things.
float duration = 0.04f;
controls.Main.Shoot.interactions = $"Hold(duration={duration})";
// Error: "Interactions" is readonly
Is there really not a way to edit the interactions for an InputAction during runtime with code?
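For reference, the nearest thing to a workaround I can see is to override the interactions per binding rather than setting the property, via ApplyBindingOverride and InputBinding.overrideInteractions (a sketch; the binding index 0 is an assumption for this example):
// Sketch (needs UnityEngine.InputSystem): override the binding's
// interactions instead of the read-only InputAction.interactions property.
// Binding index 0 is an assumption here.
float duration = 0.04f;
InputAction shoot = controls.Main.Shoot;
shoot.ApplyBindingOverride(0, new InputBinding {
    overrideInteractions = $"hold(duration={duration})"
});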
A Quick Note
This issue does not rely on 3D-specific code or logic; it simply focuses on removing the dependency of one object on another, and I am trying to be as thorough as possible in describing the issue. While having some 3D background will probably help in understanding what the code is doing, it is not needed to separate class A from class B. I believe this task will be solved with some logical, yet lateral, thinking.
Overview
I'm refactoring some old code (written sometime in the early 90s) and there are a few classes that rely on other classes. This question will focus on a single class that relies on another single class (no other dependencies in this case). The project is a DirectX project that simply renders a few objects to the screen for working purposes. I can't really give a thorough description unfortunately; however, I can explain the problem with the code.
There are two classes that I need to focus heavily on, one of which I am currently re-writing to be generic and reusable since we now have a secondary need for rendering.
Engine3D (Currently Re-Writing)
Camera3D
I will explain in more detail below, but the gist of the situation is that Engine3D relies on Camera3D in the Render method.
Engine3D's Current Flow
The current flow of Engine3D is heavily focused on accomplishing a single goal: rendering what the project needs, and that's it.
public void Render() {
// Clear render target.
// Render camera.
// Set constant buffers.
// Render objects.
// Present back buffer.
}
The update code and the render code are all jumbled together, and every object that is rendered to the screen is set up inside the Render method. This isn't good for reusability, as it forces the exact same scene to be rendered each time; therefore I am breaking it down, creating a generic Engine3D, and then utilizing it in my (let's call it Form1) code.
The New Flow
The idea is to make rendering objects to the screen a simple task by making a Draw call to the Engine3D and passing in the object to be rendered. Much like the old days of XNA Framework. A basic representation of the new flow of Engine3D is:
// I may move this to the constructor; if you believe this is a good idea, please let me know.
public new virtual void Initialize() {
    base.Initialize();
    OnInitialize(this, new EventArgs());
    RenderLoop.Run(Window, () => {
        if (!Paused) {
            OnUpdate(this, new EventArgs());
            Render();
        }
    });
}

protected override void Render() {
    // Clear render target: context.ClearRenderTargetView(...);
    // Set constant buffers.
    OnRender(this, new EventArgs());
    // Present back buffer.
}
Where OnUpdate will be utilized to update any objects on the screen, and OnRender will handle the new Draw calls.
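For completeness, the hooks referenced above are plain events on Engine3D; roughly like this (a sketch, with accessibility simplified):
// Sketch of the OnInitialize/OnUpdate/OnRender hooks; the empty-delegate
// initializers just avoid null checks when raising them.
public event EventHandler OnInitialize = delegate { };
public event EventHandler OnUpdate = delegate { };
public event EventHandler OnRender = delegate { };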
The Issue
The issue is that the old flow (within the render loop) cleared the render target, then rendered the camera, then set up the constant buffers. I've accomplished the first item in that list rather easily, and the second is a simple Draw call in the new flow (it can come after setting up the buffers); the real problem is setting up the constant buffers. The following lines of code require the Camera3D object, and I am having trouble moving them around.
ConstantBuffers.PerFrame perFrame = new ConstantBuffers.PerFrame();
perFrame.Light.Direction = (camera.TargetPosition - camera.Position);
perFrame.CameraPosition = camera.Position;
perFrame.CameraUp = camera.Up;
context.AddResource(perFrame);
This variable is then added to the resource list of the render target, which must remain in Engine3D to prevent overly complicated drawing code.
There are other objects later in the code that rely on Camera3D's World property, but once I solve how to separate the Engine3D from Camera3D, I'm sure I can take care of the rest easily.
The Question
How can I separate this dependency from the Engine3D class?
A few things I have thought of are:
• Create a method that sets the buffers, which must be called prior to draw.
• Make these properties static on Camera3D as there is always one camera, never more.
• Create a method specifically for the camera that handles this issue.
• Create a middleman class to handle all of this.
• Combine the Engine3D and Camera3D classes.
If there is any confusion as to what I am trying to achieve, please let me know and I will clarify the best I can.
The refactoring you want to do is called Pure Fabrication.
A proposed solution of yours is to:
Make these properties static on Camera3D as there is always one camera, never more.
I suggest that:
Instead of making them static, you can create another class (name it StudioSetup) that contains the fields which are needed in Engine3D (and which you are looking to make static in your Camera3D);
Populate an object of that class with current values and pass that to Engine3D->Render();
Now the dependency on Camera3D has been replaced with a dependency on a StudioSetup object.
This is similar to your "Create a middleman class to handle all of this." solution. However, the middleman does not do anything except work as a one-way courier.
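A minimal sketch of that courier (the field list is inferred from the buffer code in the question, and Vector3 stands in for whatever vector type the project uses, e.g. SharpDX.Vector3):
// Plain data carrier; Engine3D depends on this, never on Camera3D.
public class StudioSetup {
    public Vector3 CameraPosition;
    public Vector3 CameraTarget;
    public Vector3 CameraUp;
}

// Caller (e.g. Form1) copies the camera state each frame and hands it over:
var setup = new StudioSetup {
    CameraPosition = camera.Position,
    CameraTarget = camera.TargetPosition,
    CameraUp = camera.Up
};
engine.Render(setup);

// Inside Engine3D.Render(StudioSetup setup), the per-frame buffer becomes:
// perFrame.Light.Direction = setup.CameraTarget - setup.CameraPosition;
// perFrame.CameraPosition = setup.CameraPosition;
// perFrame.CameraUp = setup.CameraUp;
Engine3D now compiles without ever referencing Camera3D, and anything camera-like can fill in a StudioSetup.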
I’m trying to find a good way to play background music in Unity 3D. I want the music to keep playing consistently through scene loads. DontDestroyOnLoad is fine and works, but every time I load the same scene, it makes another music GameObject, because the scene itself has that GameObject in it. How can I solve this? I am a “beginner” (kind of), so I would like code I can understand.
I'd hands-down recommend starting with an asset like 'EazySoundManagerDemo'. It needs a little refactoring and refinement (i.e. it uses three arrays of audio with three sets of accessor functions, instead of one set with an AudioPurpose enum to increase code reuse).
It does however solve the basic problem you have and is a good intro to using an audio manager / layer instead of simply playing audio directly from your GameObjects. Give that a shot, learn from it and then adapt it or create your own audio management layer.
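If it helps, the core of that "audio manager layer" idea is small. A minimal sketch (my own names, not EazySoundManager's actual API):
using UnityEngine;

// One persistent object owns the AudioSource; other GameObjects ask it
// to play clips instead of playing audio themselves.
public class AudioManager : MonoBehaviour
{
    public static AudioManager Instance { get; private set; }
    private AudioSource source;

    void Awake()
    {
        if (Instance != null) { Destroy(gameObject); return; } // kill duplicates
        Instance = this;
        DontDestroyOnLoad(gameObject);
        source = gameObject.AddComponent<AudioSource>();
    }

    public void PlayMusic(AudioClip clip)
    {
        source.clip = clip;
        source.loop = true;
        source.Play();
    }
}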
Good Luck!
I recommend creating an AudioSource object, then creating a script for that object, and in its Awake function doing this:
void Awake() {
DontDestroyOnLoad(this.gameObject);
}
This will keep the background music playing between scenes. For more information, see Unity's documentation on this function.
With help from a question on the Unity forum, I think I have solved my problem. The link to the question is here...
https://answers.unity.com/questions/982403/how-to-not-duplicate-game-objects-on-dontdestroyon.html
The best answer there is the one I'm using.
The code is this...
private static Player playerInstance;
void Awake() {
    if (playerInstance == null) {
        // First instance: keep it alive across scene loads.
        playerInstance = this;
        DontDestroyOnLoad(gameObject);
    } else {
        // A copy already survived an earlier load; remove this duplicate.
        Destroy(gameObject); // Used Destroy instead of DestroyObject
    }
}
I want many GameObjects on the scene to have the Y position "animated" programmatically. I have a class, let's call it "TheGameObject", which doesn't inherit from MonoBehaviour and as such can't have the Update() function that I need to achieve the Y movement of the GameObjects.
I decided to try using a delegate but then a problem came up: I can pass only one Transform to the delegate.
Here's the code for the delegate, in a class (let's call it "staticClass") that derives from MonoBehaviour and exposes the event as static:
public delegate void UpdatingEventHandler(Transform t);
public static event UpdatingEventHandler Updating;
void Update() {
if(Updating != null)
Updating(/*Transform that should be passed*/);
}
And this is the code in the "TheGameObject" class:
public GameObject gameObject { get; set; }
private void spawn() {
GameObject go = new GameObject();
staticClass.Updating += animateYpos(go.transform);
}
void animateYpos(Transform t) {
//modify t.position.y
}
Is there a way to pass each respective Transform to the delegate, in order to call Updating() in the static class's Update() and have all the GameObjects move accordingly?
The problem isn't the type of the parameter that is passed, but which Transform is passed, so that each different Transform will have its own Y position modified.
This is sort of totally wrong, Cress!
It's very common for experienced developers to not really understand that Unity is not object-oriented and has utterly no connection - at all - to concepts like inheritance. (Sure, the programming language currently used for writing components in Unity happens to be OO, but that's largely irrelevant.)
Good news though, the solution is incredibly simple.
All you do in Unity is write behaviors that do what you want.
Essay about it ... https://stackoverflow.com/a/37243035/294884
Try to get this concept: say you have a "robot attack device" in your game.
So, the only thing in Unity scenes is GameObjects. (There is nothing else - at all.)
Your "robot attack device" would have these behaviors. ("Behaviours" meaning components attached to it. Componantes are MonoBehavior.)
"robot attack device"
• animate Z position
• shoot every ten seconds
• respond to bashes with sound effect
and so on.
Whereas, your "kawaii flower" might have these behaviors
"kawaii flower fixed NPC"
• shoot every 5 seconds
• respond to bashes with sound effect
• rainbow animation
In this case you're asking how to write an "animate Y" behavior.
It's very easy. First note that everything - everything - you do in Unity, you do with an extension, so it will be like
public static void ForceY( this Transform tt, float goY )
{
Vector3 p = tt.position;
p.y = goY;
tt.position = p;
}
Quick tutorial on extensions: https://stackoverflow.com/a/35629303/294884
And then regarding the trivial component (all components are trivial) to animate Y, it's just something like
public class AlwaysMoveOnYTowards : MonoBehaviour
{
    [System.NonSerialized] public float targetY;

    void Update()
    {
        // Ease toward targetY each frame (lerp, or whatever easing you
        // prefer; the 5f rate here is just an example value).
        float nextY = Mathf.Lerp(transform.position.y, targetY, Time.deltaTime * 5f);
        transform.ForceY(nextY);
    }
}
Note that, of course, you just turn that component on and off as needed. So if the thing is asleep or whatever in your game
thatThing.GetComponent<AlwaysMoveOnYTowards>().enabled = false;
and later true when you want that behavior again.
Recall, all you're doing is making a model for each thing in your game (LaraCroft, dinosaur, kawaiiFlower, bullet... whatever).
(Recall there are two ways to use models in Unity: either use Unity's prefab system (which some like, some don't) or just have the model sitting offscreen and then Instantiate or use it as needed.)
{Note that a recent confusion in Unity (to choose one) is: until some years ago you had to write your own pooling for things that you had a few of. Those days are long gone; never do pooling now, just Instantiate. A huge problem with Unity is totally out-of-date example code sitting around.}
BTW here's the same sort of ForceY in the case of the UI system
public static void ForceYness(this RectTransform rt, float newY)
{
Vector2 ap = rt.anchoredPosition;
ap.y = newY;
rt.anchoredPosition = ap;
}
Finally, if you just want to animate something on Y to somewhere one time, you should get into the amazing
Tweeng
which is the crack cocaine of game engineering:
https://stackoverflow.com/a/37228628/294884
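(The gist is a time-based lerp in a coroutine; a sketch only, see the link for the real treatment:)
// Sketch: tweeng a transform's Y to targetY over 'duration' seconds.
// Start it with: StartCoroutine(TweengY(transform, 4f, 0.5f));
IEnumerator TweengY(Transform tt, float targetY, float duration)
{
    float startY = tt.position.y;
    for (float t = 0f; t < 1f; t += Time.deltaTime / duration)
    {
        tt.ForceY(Mathf.Lerp(startY, targetY, t));
        yield return null;
    }
    tt.ForceY(targetY); // land exactly on the target
}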
A huge problem with Unity is folks often don't realize how simple it is to make games. (There are many classic examples of this: you get thousands of questions asking how to make a (totally trivial) timer in Unity, where the OP has tied themselves in knots using coroutines. Of course, you simply use Invoke or InvokeRepeating 99% of the time for timers in Unity. Timers are one of the most basic parts of a game engine: of course, obviously, Unity threw in a trivial way to do it.)
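(To make the timer point concrete, a minimal sketch; the names are mine:)
public class ShootEveryTenSeconds : MonoBehaviour
{
    void Start()
    {
        // First shot after ten seconds, then every ten seconds. No coroutines.
        InvokeRepeating(nameof(Shoot), 10f, 10f);
    }

    void Shoot()
    {
        // ... fire, play a sound, whatever the behavior does.
    }
}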
Here's a "folder" (actually just a pointless empty game object, with these models sitting under it) holding some models in a scene for a game.
Those are some of the enemies in the game. As the engineer, I just popped them in there; the game "designer" would come along and set the qualities (so the potato is fast, the turnip is actually an unkillable boss, the flying head connects to Google Maps, or whatever).
Same deal, this is from the "folder" (aside: if you use an empty game object as a, well, folder to just hold shit, it's often referred to as a "folder" - it's no less a game object) with some "peeps" ("player projectiles"). Again, these are the very models of these things to be used in the game. So obviously these would just be sitting offscreen, as you do with games. Look, here they are, literally just sitting around at "-10" or something outside of the camera frustum (a good place to remember a Unity "camera" is nothing more than... a GameObject with certain components (which Unity already wrote for our convenience) attached).
(Again: in many cases you may prefer to use prefabs. Unity offers both approaches: prefabs, or "just sitting offscreen". Personally I suggest there is a lot to be said for "just sitting offscreen" at first, as it makes you engineer proper states and so on; you'll inherently have to have a "just waiting" state and so on, which is highly important. I strongly encourage you, at first, to just 'sit your models around offscreen', don't use prefabs at first. Anyway that's another issue.)
Thank God I was born an engineer, not a wanker "game designer", so someone else comes along and sets that sort of thing as they see fit. (And of course, indeed, adds the profoundly important (from a player point of view) components such as, you know, the renderer, sound effects, and the like.)
Note, Cress: you may notice above that, in the naming there (it's purely a naming issue, not a deep one), we very unsensibly went against just what I describe here, because "life's like that". Notice I have a component that should be named, say, "Killableness", or perhaps just "Projectileness", or indeed just "Power" or "Speed". Because life's like that, I rather unsensibly named it "missile" or "projectile" rather than "flight" or "attackPower". You can see this is very, very bad, because when I reuse the component "attackPower" (stupidly named "missile" here) in the next project, which involves, say, robotic diggers or something rather than missiles, everyone will scream at me: "Dude, why is our attack-power component, attached to our robotic spiders, called 'missile' instead of 'attack power'? WTF?" I'm sure you see what I mean.
Note too, there's a great example there of how, while Unity and GameObject have no connection at all to computer science or programming, it's totally normal (if confusing) that in writing components you have to be a master of OO, since the current language used by Unity does indeed happen to be an OO language (but do bear in mind they could change to Lisp or something at any time, and it wouldn't fundamentally affect Unity). As an experienced programmer, you will instantly see that (for the components discussed here) I have something like a base class "Flightyness", and then subclasses like "ParabolaLikeFlightyness", "StraightlineFlightyness", "BirdFlightyness", "NonColliderBasedFlightyness", "CrowdNetworkDrivenFlightyness", "AIFlightyness", and so on; you can see each has the "general" settings (for Steve the game designer to adjust) and more specific settings (again for Steve to adjust). Some random code fragments that will make perfect sense to you...
public class Enemy : BaseFrite
{
    public tk2dSpriteAnimator animMain;
    public string usualAnimName;
    [System.NonSerialized] public Enemies boss;

    [Header("For this particular enemy class...")]
    public float typeSpeedFactor;
    public int typeStrength;
    public int value;
}

public class Missile : Projectile
{
    [Header("For the missile in particular...")]
    public float splashMeasuredInEnemyHeight;
    public int damageSplashMode;
}
Note the use of [Header] ... one of the most important things in Unity! :)
Note how well the "Header" thing works, especially when you chain down through derived classes. It's nothing but pleasure working on a big project where everyone works like that, making super-tidy, super-clear Inspector panels for your models sitting offscreen. It's a case of "Unity got it really right". Wait until you get to the things they fucked up! :O
Please consider what Joe said, but for passing multiple parameters to a delegate you can use this, IIRC:
public delegate void DoStuffToTransform(params Transform[] transform);
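Usage would look something like this (a sketch; note it's really the closure capture in the second part that lets each subscriber know "which Transform"):
public delegate void DoStuffToTransform(params Transform[] transforms);
public static event DoStuffToTransform Updating;

// Invoking side: pass any number of transforms in one call
// (t1, t2, t3 are placeholders).
if (Updating != null)
    Updating(t1, t2, t3);

// Per-object alternative: subscribe a lambda that captures the spawned
// object's own Transform, so no parameter is needed to identify it.
GameObject go = new GameObject();
staticClass.Updating += (transforms) => animateYpos(go.transform);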
Using MonoGame (basically XNA), I have some code which allows you to host a DirectX 11 window inside a System.Windows.Controls.Image, the purpose of which is to display the window as a standard WPF control.
I created this code by looking at a number of online code examples which demonstrated similar functionality (as I am a complete newbie to game dev). Among the code I have leveraged, there is a method of specific interest to me, which looks like this:
private static void InitializeGraphicsDevice(D3D11Host game, int width, int height)
{
lock (GraphicsDeviceLock)
{
_ReferenceCount++;
if (_ReferenceCount == 1)
{
// Create Direct3D 11 device.
_GraphicsDeviceManager = new WpfGraphicsDeviceManager(game, width, height);
_GraphicsDeviceManager.CreateDevice();
}
}
}
This code is called on the creation of the hosting object (i.e. System.Windows.Controls.Image), and the intent clearly appears to be to limit the creation of multiple GraphicsDeviceManagers. However, I have ended up in a situation where this code prevents me from creating multiple game windows, as needed.
I have changed this code from static to instance-based and removed the counter, and everything seems to be working fine, BUT I am concerned that there is something fundamental I don't understand which might come up later.
So, why does the above code prevent creating multiple device managers? Is it legal to create multiple graphics device managers in XNA (MonoGame)? I have to assume there must have been a reason for the original approach.
I think it's because of the fundamental design philosophy behind XNA: you have one game loop, one window for graphics output, and so on.
If I remember correctly, it should be no problem to create multiple graphics devices on different handles (in your case, different windows).
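For illustration, the instance-based version described in the question might look like this (a sketch; WpfGraphicsDeviceManager and D3D11Host are the question's own sample types):
// One device manager per hosted window; no shared static state, no refcount.
private WpfGraphicsDeviceManager _graphicsDeviceManager;

private void InitializeGraphicsDevice(D3D11Host game, int width, int height)
{
    _graphicsDeviceManager = new WpfGraphicsDeviceManager(game, width, height);
    _graphicsDeviceManager.CreateDevice();
}
Just remember each device now has its own lifetime, so dispose it along with its window.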