Pre Notes
I'm creating a 4-player, multiplayer game.
I have 4 identical controllers that are like USB SNES controllers.
All the controllers work and register input correctly.
I want this game to work on multiple platforms, devices, etc.
Here's the issue:
When I start the game, the controllers are auto-mapped as follows...
Player 1: controller 1
Player 2: controller 2
Player 3: controller 3
Player 4: controller 6
I'm assuming that if I change the order the controllers are plugged in, run this on a different computer/device, or use different controllers, the input mapping will almost certainly not match the auto-mapping above.
My Question:
I do have a script that detects which controller is inputting (e.g. the JoyNum, which is how I figured out that Player 4 was controller 6).
Given that, is it possible to set Unity's default "Input Manager's" JoyNums at runtime to compensate?
My thoughts were to create a screen where everyone pushes Start to join the game. At that point, I will be able to detect all the controllers (e.g. which JoyNum belongs to each player; a rough sketch of the idea follows). The last step will be re-mapping Unity's Input Manager. Is this possible?
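For reference, here's a rough sketch of the kind of join-screen detection I mean. It polls one button index on each of Unity's 8 joystick slots and logs which JoyNum fired. (Button index 9 as "Start" is just an assumption; the index varies by controller.)

    using UnityEngine;

    public class JoinScreen : MonoBehaviour
    {
        const int StartButtonIndex = 9; // assumption: "Start" is often, but not always, button 9

        void Update()
        {
            // Unity exposes KeyCodes named Joystick{N}Button{M} for N = 1..8.
            for (int joy = 1; joy <= 8; joy++)
            {
                var code = (KeyCode)System.Enum.Parse(typeof(KeyCode),
                    $"Joystick{joy}Button{StartButtonIndex}");
                if (Input.GetKeyDown(code))
                    Debug.Log($"JoyNum {joy} pressed Start - assign it to the next free player.");
            }
        }
    }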
Thanks!
PS: I searched and found "Custom Input Manager" on Git, https://github.com/daemon3000/InputManager
However, the project doesn't build, and I have no idea how to implement it.
So, after much trial and error, I came up with a solution that works. (Probably not the cleanest or recommended approach, though.)
Step 1) Add every joystick axis (all 28) for every controller (all 8) to your Input Manager. (A script can make this much less time-consuming; see the sketch after these steps.)
Step 2) Create your own Input Mapper. (Detect the "up", and "left" of every controller.)
Step 3) Save those results to the "PlayerPrefs" file.
Step 4) Create your own input detection script. (The only things that need to access Unity's Input Manager are the joystick axes.) The buttons can be handled manually using the strings loaded from the PlayerPrefs file.
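Here is the kind of editor script I mean for Step 1; treat it as a sketch only. It writes entries straight into ProjectSettings/InputManager.asset via SerializedObject. The internal property names (m_Axes, m_Name, joyNum, etc.) are undocumented and may differ between Unity versions, and the generated axis names (Joy{N}Axis{M}) are just my convention. Put it in an Editor folder:

    using UnityEditor;
    using UnityEngine;

    public static class InputAxisGenerator
    {
        [MenuItem("Tools/Generate Joystick Axes")]
        public static void Generate()
        {
            // The Input Manager is just a serialized asset we can edit in code.
            // Note: running this twice will append duplicate entries.
            var inputManager = new SerializedObject(
                AssetDatabase.LoadAllAssetsAtPath("ProjectSettings/InputManager.asset")[0]);
            var axes = inputManager.FindProperty("m_Axes");

            for (int joy = 1; joy <= 8; joy++)          // all 8 joystick slots
            {
                for (int axis = 0; axis < 28; axis++)   // all 28 axes per joystick
                {
                    axes.arraySize++;
                    var entry = axes.GetArrayElementAtIndex(axes.arraySize - 1);
                    entry.FindPropertyRelative("m_Name").stringValue = $"Joy{joy}Axis{axis + 1}";
                    entry.FindPropertyRelative("type").intValue = 2;     // 2 = Joystick Axis
                    entry.FindPropertyRelative("axis").intValue = axis;  // 0-based axis index
                    entry.FindPropertyRelative("joyNum").intValue = joy; // 1-8 = specific joystick
                    entry.FindPropertyRelative("sensitivity").floatValue = 1f;
                    entry.FindPropertyRelative("dead").floatValue = 0.19f;
                }
            }
            inputManager.ApplyModifiedProperties();
        }
    }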
Pros of this approach:
Works as a catch-all (for any controller, device, computer, etc.).
Is very stable!
Allows for maximum control over all inputs and what they do.
Cons:
Seems a little ridiculous to have to do all this.
Fairly time-consuming (but can easily be reused on future projects).
Seems sloppy, but works great.
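To make Steps 3 and 4 concrete, here is a minimal sketch of the runtime side. Buttons are resolved from strings saved in PlayerPrefs during the mapping step ("joystick 3 button 0" style names work with Input.GetKeyDown), while axes go through the entries generated above. The PlayerPrefs key names here are illustrative:

    using UnityEngine;

    public class PlayerInput : MonoBehaviour
    {
        public int playerNumber = 1;
        int joyNum;
        string jumpButton; // e.g. "joystick 3 button 0", saved by the mapper

        void Start()
        {
            // Load whatever the join screen / mapper stored for this player.
            joyNum = PlayerPrefs.GetInt("P" + playerNumber + "_JoyNum", playerNumber);
            jumpButton = PlayerPrefs.GetString("P" + playerNumber + "_Jump", "");
        }

        public float Horizontal()
        {
            // Axis names follow the convention used by the generator sketch above.
            return Input.GetAxis("Joy" + joyNum + "Axis1");
        }

        public bool JumpPressed()
        {
            return jumpButton.Length > 0 && Input.GetKeyDown(jumpButton);
        }
    }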
Related
I am currently using the NVIDIA FleX package in Unity3D to create soft-bodied, jelly objects. I'm using Unity for animation only, not game dev.
What I am aiming to make is a transparent, jello sphere that retains its spherical shape with elasticity.
The first way I've tried to achieve this is using the Flex Array + fluid setting. I've been playing with the settings, but I can't get it to remain a sphere; it just becomes a more or less viscous fluid blob.
The second way is using the Flex Soft + fluid setting. It is much better in terms of physics, but even with "draw particles" off, the water droplets are each separate rather than forming one jelly sphere.
This is what it looks like before hitting play, where the left is with Flex Array and the right is Flex Soft. The particles for Array are visible but not for Soft.
This is after hitting play, where the Array becomes one viscous fluid, but not a sphere, and the Soft is very jello-like but the water droplets are all separated.
A solution for either of the two ways would be much appreciated!
The standard approach is to create an NVIDIA Flex Controller first...
Then you should also create a Flex Soft Asset...
Then you should create or select a game object and through the Add Component tab in the game object's inspector, find the Flex Soft Actor component [see it loaded up in the image below]...
Ensure your Soft Actor Asset mentioned previously has your required mesh type selected in the inspector option [I chose sphere in the image here] and check to see it looks something like the image below to be sure...
So after that, hopefully, you can just press play and see it in action as it drops and contorts for you.
If not, I have created a quick example for you to download as a unitypackage.
It may still require further setup with the Package Manager, as the Flex plugin is already inside the package I'm providing here [using Unity 2020.3.5f1].
Flex in Unity package
Anyway, hope this gets you started and somewhere towards your goal with Flex.
As a bonus, I've added a small script to move the flex object, since this falls outside the usual approach: we have to get the NVIDIA Flex component class of choice and invoke its ApplyImpulse method.
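For anyone who wants the shape of that script without downloading the package, here is a hedged sketch. It assumes the plugin's NVIDIA.Flex namespace and the FlexSoftActor component; the exact class names and the ApplyImpulse signature may differ by plugin version:

    using UnityEngine;
    using NVIDIA.Flex; // assumption: the plugin's namespace

    public class FlexNudger : MonoBehaviour
    {
        public float impulseStrength = 5f;
        FlexSoftActor actor;

        void Start()
        {
            actor = GetComponent<FlexSoftActor>();
        }

        void Update()
        {
            // Push the soft body upward whenever space is pressed.
            if (Input.GetKeyDown(KeyCode.Space))
                actor.ApplyImpulse(Vector3.up * impulseStrength);
        }
    }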
Cheers :)
Edit: There is a small set of 3 tutorials from NVIDIA GameWorks on integrating the plugin with Unity and demonstrating some features - this material is included in my downloadable package provided above.
Here is the YouTube link to the set:
Nvidia Gameworks FleX tutorials on Youtube
Edit 2: Rereading your question made me think I hadn't really given you the definitive answer on using a cloth actor and having the mesh renderer deform via the Flex cloth deform component.
I am providing a link to another Unity package here that shows this in action as well, letting you see the game object and how the cloth component from NVIDIA Flex works with the standard mesh filter and mesh renderer. Hope this more accurately answers your question :)
Example also using Cloth Actors as well as Soft Actors in NVIDIA FleX
Recently I've been messing around with machine learning and I wanted to see if I could create AI for the game I'm currently making. The AI should be able to solve the puzzle for you.
The game currently works as follows. You have a few tiles in a grid; some of them are movable, some of them aren't. You click on a tile you want to move and drag it in a direction. It'll then start moving the tiles, and optionally also the player character itself. The end goal is to reach the end tile. Level example, Solving the level
Playing the game yourself:
Whenever you select a tile (you do this by clicking), you hold the mouse button down and drag in the direction you want the tile to move. Once the tiles are done moving, the player object will move one step in the same direction. If the player is on top of a tile that you move, it'll move with the tile, and afterwards take another step in the same direction.
I was wondering if it's possible (and if so, how) for machine learning to define a position on the screen, (optionally) click and then define a movement direction?
Please keep in mind that I'm fairly new to machine learning!
To give some more clarification:
The grid is static for now, to keep it simple for the AI. But later on, the goal is to generate a level randomly and see if it can solve it.
In theory, all the AI should have to do, is select a tile to move (A number between 0 and the width of the grid, and the same for the height). And define a movement direction. Either (0, 1), (0, -1), (1, 0) or (-1, 0).
Falling off the grid results in a reset.
Reaching the end of the grid results in a win.
Moving in an invalid direction results in a reset.
Based on your bullet points, I would honestly suggest just implementing the A* pathfinding algorithm, with some modifications to emulate machine learning. The A* pathfinding algorithm determines the best path on a grid from point A to point B, and with some clever programming you could achieve the result you want with a reasonable amount of overhead.
Something along the lines of having a list of "do not touch" grid points (death traps, etc.), which gets filled as the AI runs into them, so on the next iteration it knows not to take that path. This is a very basic abstraction of your idea, but it would be very achievable.
Obviously we cannot write the code for you, luckily there are tons of resources on A* Pathfinding to help you get started!
Here is a simple tutorial
Here is an implementation that was used in Unity
Here is a code review of someone's implementation
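That said, to give a feel for the idea, here is a compact, illustrative grid A* with the "do not touch" set described above (uniform move costs, 4-way movement; all names are mine, not from any particular library):

    using System;
    using System.Collections.Generic;
    using UnityEngine;

    public static class GridPathfinder
    {
        static readonly Vector2Int[] Dirs =
        {
            new Vector2Int(0, 1), new Vector2Int(0, -1),
            new Vector2Int(1, 0), new Vector2Int(-1, 0)
        };

        public static List<Vector2Int> FindPath(Vector2Int start, Vector2Int goal,
            int width, int height, HashSet<Vector2Int> doNotTouch)
        {
            var open = new List<Vector2Int> { start };
            var cameFrom = new Dictionary<Vector2Int, Vector2Int>();
            var gScore = new Dictionary<Vector2Int, int> { [start] = 0 };

            while (open.Count > 0)
            {
                // Pick the open node with the lowest f = g + Manhattan heuristic.
                Vector2Int current = open[0];
                foreach (var n in open)
                    if (gScore[n] + Manhattan(n, goal) < gScore[current] + Manhattan(current, goal))
                        current = n;

                if (current == goal) return Reconstruct(cameFrom, current);
                open.Remove(current);

                foreach (var d in Dirs)
                {
                    var next = current + d;
                    if (next.x < 0 || next.y < 0 || next.x >= width || next.y >= height) continue;
                    if (doNotTouch.Contains(next)) continue; // learned "death trap" cells

                    int tentative = gScore[current] + 1;
                    if (!gScore.TryGetValue(next, out int g) || tentative < g)
                    {
                        cameFrom[next] = current;
                        gScore[next] = tentative;
                        if (!open.Contains(next)) open.Add(next);
                    }
                }
            }
            return null; // no path exists with the current blocked set
        }

        static int Manhattan(Vector2Int a, Vector2Int b) =>
            Math.Abs(a.x - b.x) + Math.Abs(a.y - b.y);

        static List<Vector2Int> Reconstruct(Dictionary<Vector2Int, Vector2Int> cameFrom, Vector2Int node)
        {
            var path = new List<Vector2Int> { node };
            while (cameFrom.TryGetValue(node, out node)) path.Add(node);
            path.Reverse();
            return path;
        }
    }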
Assuming you actually want to use machine learning and not just a pathing system:
I will lay out some pseudocode that you can use for a basic scenario of the AI learning a static board. There are different ways you can write and implement this code; I have only suggested one. But before we get to that, let's first discuss the project overall and some suggestions for it.
Suggestions:
I would say that you will want to measure the game state on the board, and not the mouse movements. So basically the AI is measuring what moves can be made. The mouse movement part is just a way for the player to interact with the board so it is not needed by the AI. It will be simpler to just let the AI make the moves directly.
I don't think that Unity is a good platform for this kind of experimentation. I think you would be better off programming this in a console program - for example, using a 2-dimensional array (the board) in a Visual Studio C# console program, or in a C console program via the CS50 IDE (comes with free sign-up via edx.org for CS50: https://manual.cs50.net/ide). I have suggested these because I think Unity will just add unnecessary layers to a machine learning experiment.
My assumption is you want to learn machine learning, and not just how to make an AI solve a puzzle in your game. In the latter case, better options would be a proper pathing system, or having the AI brute-force several attempts at the puzzle before moving and selecting the solution with the fewest steps.
Pseudo Code:
Now onto some pseudo code for your machine learning program.
Assumptions:
A. You have a board with set dimensions that you can pass to the AI at the start.
B. There are tiles on the board the AI cannot move into (obstacles).
C. The AI should learn to solve the problem, instead of having the answer at the beginning because of good code that we designed (like a decent pathing system).
D. We don't want the AI to brute-force this by trying a billion different combinations before moving, because that presumes perfect understanding of its environment. If the AI does have perfect understanding of its environment, then yes, it should use brute force where reasonable.
Coding Logic:
Scenario 1: The AI plays on the same board every time with the same starting conditions.
I. You start by setting a discrete amount of time in which the AI makes a move. For example 1 move every 1 second.
II. Have a counter for the number of moves made to reach the end tile, and record the sequence of moves associated with this counter.
III. If the AI has no history with which to make a move it makes a move in a random direction.
IV. If the move is invalid then the counter increases and the move is recorded, but the AI stays on the same tile.
V. When the AI completes the puzzle the counter and sequence of moves is stored for later use.
VI. In subsequent playthroughs the AI always starts by selecting the path it has tried with the smallest count.
VII. Once the AI begins moving, it has a 1% chance per move to try something different. When the 1% is triggered, the AI has a 50% chance to try one of the following:
a. 50% chance: It checks through all the sequences in its history to see if any stretch of a past sequence gets from its current tile to the finish tile in fewer moves than its current path. If there are multiple, it selects the shortest. When the AI finishes the round, it records the new total sequence taken.
b. 50% chance: The AI makes a move in a random direction. Subsequent moves again follow this logic: 50% chance to check the history, 50% chance to move randomly again. When the round is completed, again record the sequence of moves.
VIII. You can seed this by making the AI run the puzzle 10,000 times in a few seconds behind the scenes; when you observe it afterwards, it should have selected a reasonable path. (A trimmed-down code sketch of this loop follows below.)
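Here is a trimmed-down C# sketch of that loop: replay the shortest recorded solution, with a small per-move chance of random exploration. The history-splicing branch (VII.a) is left out for brevity, and the board interface is illustrative, not a real API:

    using System;
    using System.Collections.Generic;

    public interface IPuzzleBoard
    {
        bool Solved { get; }
        void TryMove(int direction); // 0..3 => up/down/left/right; invalid moves are no-ops
    }

    public class SequenceLearner
    {
        readonly Random rng = new Random();
        readonly List<List<int>> solutions = new List<List<int>>(); // recorded winning sequences
        const double ExploreChance = 0.01; // step VII: 1% chance per move

        // Run one round on a freshly reset board; call repeatedly to seed (step VIII).
        public List<int> PlayOnce(IPuzzleBoard board)
        {
            var moves = new List<int>();
            List<int> best = Shortest(solutions); // step VI: prefer the shortest known path

            while (!board.Solved)
            {
                int move;
                if (best != null && moves.Count < best.Count && rng.NextDouble() >= ExploreChance)
                    move = best[moves.Count]; // follow the recorded sequence
                else
                    move = rng.Next(4);       // steps III / VII.b: random direction

                board.TryMove(move);          // step IV: invalid moves still count
                moves.Add(move);
            }

            solutions.Add(moves);             // step V: store the winning sequence
            return moves;
        }

        static List<int> Shortest(List<List<int>> seqs)
        {
            List<int> best = null;
            foreach (var s in seqs)
                if (best == null || s.Count < best.Count) best = s;
            return best;
        }
    }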
If a computer can brute-force a problem in reasonable time, it should start with that. However, bear in mind that machine learning in a computer program, where the machine already knows all the variables, is different from machine learning in the environment, where for example a robot has to navigate an unknown space. The above approach should work in the latter case. You may also want to investigate the idea of the AI mapping out the entire terrain by trying to move to every tile and forming an understanding of the environment, then just brute-forcing a solution once it understands the variables.
In a non-static environment you will want to enhance the valuation system. This answer is already too long, so I won't go into that here.
Short answer to both questions: yes.
You can create an AI that uses either the game state (so it can read the objects/properties of your grid) or raw screen input combined with image processing, which is hard to create and computationally expensive to run.
On the Unity forums there are several answers to questions like "How to mimic mouse input". Take a look here:
https://answers.unity.com/questions/564664/how-i-can-move-mouse-cursor-without-mouse-but-with.html
If you are looking for ready-made code for the AI, sadly, you are out of luck. There are lots of AI tutorials online for creating a simple AI for such a game. I would advise not diving head-first into the fancy stuff (like neural networks) and starting simple. It would be best, in my opinion, to start by creating a (class) structure for your AI and learning AI by practice. Start with an "AI" that just randomly returns something, then see what you can learn and manage online and make other versions.
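As a concrete starting point, a "random AI" for your move format (a tile plus one of the four directions from your bullet points) can be as small as this; the type names are just illustrative:

    using UnityEngine;

    public struct Move
    {
        public Vector2Int tile;      // which tile to move (x, y on the grid)
        public Vector2Int direction; // (0,1), (0,-1), (1,0) or (-1,0)
    }

    public class RandomAgent
    {
        static readonly Vector2Int[] Directions =
        {
            new Vector2Int(0, 1), new Vector2Int(0, -1),
            new Vector2Int(1, 0), new Vector2Int(-1, 0)
        };

        public Move NextMove(int gridWidth, int gridHeight)
        {
            // Pick any tile on the grid and any of the four directions.
            return new Move
            {
                tile = new Vector2Int(Random.Range(0, gridWidth), Random.Range(0, gridHeight)),
                direction = Directions[Random.Range(0, Directions.Length)]
            };
        }
    }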
For one of your first AIs, take a look into goal-driven AI or state machines. I think they should be able to give nice results, given your GIFs.
So, I am a beginner at Unity and cannot grasp how you are supposed to apply a pattern or architecture with Unity.
I am currently making a platform game with a character that is supposed to stay on moving platforms and not fall off the screen (very basic). On the platforms there are "monsters"; if you touch these monsters you also lose. There are also some trees and such on the platforms.
This is what I have so far:
In the "manual gameobject list" or whatever you call it:
Directional light
Main camera
Background (just a sprite that is always showing)
Player (contains a JS script with character-specific code for jumping and such)
Platformspawner (contains only a C# script. This script spawns multiple platform GameObjects; these GameObjects then use a script called platform.cs, which spawns the monsters and trees on each platform. The monsters and trees each use their own C# file that keeps track of collisions and such.)
For me this is pretty obvious code, and I don't know how to organize it much better. Any tips? I tried following an MVC tutorial, but there doesn't seem to be much GameObject spawning in those tutorials, which is what I have to do, so they confuse me.
Your hierarchy (the area of the editor where all of your GameObjects are stored) looks pretty solid. You typically don't need much organization in there if your program is pretty small, but of course putting things into folders (for example, putting multiple canvases into a large Canvas folder) can't hurt.
As for your project, where all of your assets are stored, I believe that folders are paramount for good organization, because if you combine all of your assets it'll be hard to distinguish them. I recommend having base folders for different types of assets (/Scripts/, /Materials/, /Prefabs/, etc.) and storing individual assets in there (and of course adding more folders as you see fit). You should almost never have to have an asset directly in the /Assets/ folder.
Of course, that is only my opinion and my method of organization. Everyone has a different method of managing their project, but what I've described above is the basic system of organization. Happy coding!
I am working on a local multiplayer Unity-based game. I want to allow Player 1 to use the keyboard to control their character while Player 2 uses an Xbox 360 controller. The issue I am having is that the controller is only registering as joystick 1. Here is what I have done so far:
I set up two separate inputs, one polling joystick1 and the keyboard for player 1 with another polling joystick2 (and some alternate keyboard keys) for player 2.
I have the system working with both players on the keyboard, but I am unable to get player 1 to use the keyboard while player 2 uses the controller (or vice versa).
My question is, how can I force a controller to become joystick2 or joystick1? Is there a way to control which input is registered to which number?
I'm looking for that solution as well. It seems like Portal 2 has managed to support keyboard+mouse / Xbox 360 controller local cooperative play, but people have been struggling to get it to work because of this same problem; it seems switching the player number on the 360 controller is tricky.
Portal2 post on this matter
https://steamcommunity.com/sharedfiles/filedetails/?id=239373369
I recently downloaded Tiled Map Editor because I heard it was a great tool for making maps. I also got a .tmx "compiler" - well, something that makes the .tmx usable in XNA.
I've created a map and imported it, and it worked fine, but now comes the tricky part...
If I add a collision layer in Tiled and add a tile that marks a blocked area, how would I get the data and values, and how would I use them in XNA? And how would I make the player spawn at a certain location? Also, how do I add things like events and movable objects?
You don't have to tell me all of that, but it would be cool if you could give me an idea of how to get data and values from the .tmx and convert them into rectangles or such things ^^
Thanks in advance!
I know nothing about the .tmx format, but a little about collision.
I'm going to take a punt that your ".tmx compiler" is something that allows files of this type to be included in the content pipeline. Somewhere in this build process will be the vertex data that you can use to construct the collision primitives (shapes) for collision detection later.
ASIDE: it took me ages to get my head around the content pipeline - not for the faint-hearted, but the way to go. There are samples on the XNA website to get you going.
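If you just want to experiment before wrestling with the pipeline: a .tmx file is plain XML, so you can also parse the collision layer directly and build one Rectangle per solid tile. This sketch assumes the layer is named "Collision", uses CSV-encoded data, and contains no flipped tiles; adjust for your exporter settings:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;
    using Microsoft.Xna.Framework;

    public static class TmxCollision
    {
        // Builds one Rectangle per non-empty tile in the "Collision" layer.
        public static List<Rectangle> LoadSolidTiles(string tmxPath, int tileWidth, int tileHeight)
        {
            XDocument doc = XDocument.Load(tmxPath);
            XElement layer = doc.Root.Elements("layer")
                .First(l => (string)l.Attribute("name") == "Collision");
            int width = (int)layer.Attribute("width");

            // CSV data: one gid per tile; 0 means "no tile here".
            uint[] gids = layer.Element("data").Value
                .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                .Select(s => uint.Parse(s.Trim()))
                .ToArray();

            var rects = new List<Rectangle>();
            for (int i = 0; i < gids.Length; i++)
            {
                if (gids[i] == 0) continue;
                int x = i % width, y = i / width;
                rects.Add(new Rectangle(x * tileWidth, y * tileHeight, tileWidth, tileHeight));
            }
            return rects;
        }
    }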