How do I go about adding a second, independent camera rig to the same scene as the VR player, one that can be navigated with mouse and keyboard and renders its own view alongside the VR player's? Does VRTK/SteamVR natively have this feature?
In other words, I'd like to know how to implement asymmetrical local multiplayer, similar to the 'Panoptic' demo.
Yes, VRTK_Simulator is the script available for this:
To test a scene it is often necessary to use the headset to move to a location. This increases turn-around times and can become cumbersome. The simulator allows navigating through the scene using the keyboard instead, without the need to put on the headset. One can then move around (also through walls) while looking at the monitor and still use the controllers to interact.
Supported movements are: forward, backward, strafe left, strafe right, turn left, turn right, up, and down.
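If you want a truly independent second camera rather than simulated HMD input, a minimal hand-rolled fly camera is sketched below. It assumes a second Camera in the scene; setting stereoTargetEye to None keeps it off the headset so it renders only to the monitor (exact behavior can vary by XR SDK). The class name and speed values are illustrative, not part of VRTK or SteamVR.

```csharp
using UnityEngine;

// Minimal desktop "spectator" camera: WASD/arrow keys to move,
// hold the right mouse button and drag to look around.
public class SpectatorCamera : MonoBehaviour
{
    public float moveSpeed = 3f;
    public float lookSpeed = 2f;

    float yaw, pitch;

    void Start()
    {
        // Keep this camera out of the HMD; draw it on the main display only.
        GetComponent<Camera>().stereoTargetEye = StereoTargetEyeMask.None;
        Vector3 angles = transform.eulerAngles;
        yaw = angles.y;
        pitch = angles.x;
    }

    void Update()
    {
        if (Input.GetMouseButton(1))
        {
            yaw += Input.GetAxis("Mouse X") * lookSpeed;
            pitch = Mathf.Clamp(pitch - Input.GetAxis("Mouse Y") * lookSpeed, -89f, 89f);
            transform.rotation = Quaternion.Euler(pitch, yaw, 0f);
        }

        // Standard Input Manager axes: WASD/arrows map to Horizontal/Vertical.
        Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        transform.Translate(move * moveSpeed * Time.deltaTime, Space.Self);
    }
}
```

Attach it to the second camera and give that camera a higher depth than the VR camera, so it draws over the mirrored headset view on the monitor.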
I'm looking for advice on how to implement online chess move validation using a winboard chess engine. I am creating a mobile battle chess game in Unity 2020.3. Right now the possible moves for each player, castling rights, check status, etc. are generated by an open-source C# chess engine. I chose this engine because it's written in C# and the code could be included directly in my Unity project. I successfully wrote my own code that connects this chess engine to a 3D battle chess game.
How the game works (video preview):
A piece is selected by the player clicking it with their mouse or tapping the screen.
The possible moves for the current board are calculated by the chess engine.
I use the possible move indexes to show possible move highlights.
When a player makes one of these possible moves, the move is applied on the chess engine.
It becomes the other player's turn.
I have some custom code to run player animations based on the kind of move (e.g. an attack animation runs when a piece is captured). I also have screens that show when it's time for a promotion, when a player is in check, or when the game is over.
I'd like the process of calculating possible moves and applying moves to be done on a server. My goal is to protect against players sending illegal moves to the board and breaking the game.
How would you recommend I approach this problem? This is the last thing I need to turn this game into an enjoyable online experience; I just lack the crucial skill of understanding how to take this game online.
On the server, you'll have a separate version of the chess game running.
A player (client) will send a move to the server (a unique piece ID and the target square the player wants to move to).
On the server side, you'll need to check whether that move is indeed possible. This could be done by getting all possible moves for that piece and checking whether the intendedChessPosition can be found in the list of possibleChessPositions calculated on the server side. Only then do you actually move the piece server-side.
All players/clients will communicate using the server as the middleman and authority.
The exact implementation could change depending on the API of the chess engine you're using, but that's the basic idea.
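As a rough illustration of that flow, here's a sketch of the server-side validation step. The engine surface (IChessEngine, SideToMove, GetPossibleMoves, ApplyMove) and the BroadcastMove hook are hypothetical placeholders for whatever your chess engine and networking library actually expose.

```csharp
using System.Collections.Generic;

// Hypothetical engine surface; substitute your C# chess engine's real API.
public interface IChessEngine
{
    int SideToMove { get; }                  // whose turn it is
    List<int> GetPossibleMoves(int pieceId); // legal target squares for a piece
    void ApplyMove(int pieceId, int targetSquare);
}

public class ServerChessAuthority
{
    readonly IChessEngine engine; // the server's own authoritative copy of the game

    public ServerChessAuthority(IChessEngine engine) { this.engine = engine; }

    // Called when a client sends (pieceId, targetSquare) over the network.
    public bool OnMoveReceived(int playerId, int pieceId, int targetSquare)
    {
        if (engine.SideToMove != playerId)
            return false; // out-of-turn move: reject

        // Recompute the legal moves on the server; never trust the client's list.
        if (!engine.GetPossibleMoves(pieceId).Contains(targetSquare))
            return false; // illegal move: reject (and perhaps log the sender)

        engine.ApplyMove(pieceId, targetSquare); // advance the authoritative state
        BroadcastMove(pieceId, targetSquare);    // relay the validated move to everyone
        return true;
    }

    void BroadcastMove(int pieceId, int targetSquare)
    {
        // Placeholder: send through whatever networking layer you choose.
    }
}
```

The clients then only render what the server confirms, so an illegal move never reaches the board.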
I would like to create a shared AR game on Android phones, in which I want to:
spawn a cube for each player on an ImageTarget
allow them to control the position of their cube
allow them to see the movements of all players' cubes
I'm using Vuforia as my AR library and PUN 2 as my networking library. I have no issue synchronizing the positions and rotations of all cubes. However, the cubes do not stay on the ImageTarget properly and "jump" around. On the other hand, if I place my two phones very close together and point them at the ImageTarget at roughly the same angle, the cubes do not jump as much.
This leads me to think that the 2 instances of ARCamera fail to realize that they are pointing at the same ImageTarget from 2 different angles, and instead think that the ImageTarget exists in 2 different orientations at the same time.
Is there any way for me to tell Vuforia that I'm using multiple instances of ARCamera pointing at the same ImageTarget? (Or if my hypothesis is completely wrong, how do I actually make a multiplayer AR game?)
Thanks so much in advance!
p.s. I know the Vuforia forums are a better place to ask this question but unfortunately that forum is not particularly active, so I'm trying my luck here.
I've solved the issue by going to the ARCamera GameObject; in the Vuforia Behaviour component, I changed the World Center Mode from DEVICE to FIRST_TARGET. This allows multiple ARCamera instances to be in different positions.
More info on World Center Mode can be found here.
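As a side note on the syncing itself, one pattern that pairs well with this fix is to replicate each cube's pose relative to the ImageTarget rather than in world space, so it doesn't matter that every phone has its own world frame. A sketch using PUN 2's IPunObservable is below; it assumes each cube is parented under the ImageTarget and has a PhotonView observing this component (the class name is illustrative).

```csharp
using Photon.Pun;
using UnityEngine;

// Replicates a cube's pose in its parent's (the ImageTarget's) local space,
// so each device's differing world origin doesn't matter.
public class CubeSync : MonoBehaviourPun, IPunObservable
{
    public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.IsWriting)
        {
            // The owner sends its pose relative to the ImageTarget.
            stream.SendNext(transform.localPosition);
            stream.SendNext(transform.localRotation);
        }
        else
        {
            // Remote copies apply the pose in their own ImageTarget's space.
            transform.localPosition = (Vector3)stream.ReceiveNext();
            transform.localRotation = (Quaternion)stream.ReceiveNext();
        }
    }
}
```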
When using the Unity editor, all animations tied to the camera work, but when built and deployed onto the HoloLens the animations won't play (they are overridden by the real-life head position); the way to get around this is by attaching them to another GameObject. My issue is giving the user the ability to roam around the space while still tying animations and forced movements to the camera's parent (an empty GameObject).
I have tried making that GameObject stay where the camera is, but obviously this just produces constant motion.
Is there any way to keep the camera's parent exactly where the camera is, in order to stop the animations glitching?
I'm working on a 2D non-game application. I used TouchScript to get all the multitouch gestures, but I have an issue.
In the application, I can open a lot of popups that are draggable, pinch-resizable, and rotatable.
These popups are made with UIPanel, and I added a Collider2D to each of them.
The issue is that when two popups overlap and I want to move the one on top, I randomly hit either the one in back or the one on top.
It's as if the touch goes through the first collider and hits the one behind...
First, answering the comment on your question that suggests using the UI event system:
If you just use Unity's UI events, you won't get advanced gestures such as swipe, pinch, etc., and will have to code them yourself.
If you need these gestures, TouchScript works fine and is a good choice.
Now to your question: I had the same problem and solved it by putting the "UILayer" script on the camera instead of "CameraLayer2D".
I'm creating a simple game in Unity which uses the arrow keys to move the player. What I want to do is use the webcam as a movement-detecting device: track the user's movements and move the player accordingly. (For example, when the user moves their hand to the right, the webcam tracks it and the player moves to the right...)
So, is this possible? If so, what techniques and APIs should I use?
Thanks!
Have a look at OpenCV; it is used a lot in the field of body and head tracking, and there's a Unity plugin that implements it which might be useful.
Video Demo
Unity itself can't do this out of the box. But there is a lot of stuff out there on the internet.
This one has some interesting looking links.
Emgu CV looks interesting too.
There is a JavaScript hand-tracking tool too.
And of course there's the Kinect, but you need the 3D sensor.
You could also use the Leap Motion.
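To make the Emgu CV suggestion concrete, here is a rough, untested sketch of the simplest possible detector: it differences consecutive webcam frames and reports whether most of the motion happened in the left or right half of the image, which you could then map onto your arrow-key movement. The class name and threshold values are illustrative, and remember the webcam image is mirrored relative to the user.

```csharp
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

class MotionDirectionDemo
{
    static void Main()
    {
        using (var capture = new VideoCapture(0)) // default webcam
        {
            Mat previous = null;

            while (true)
            {
                Mat frame = capture.QueryFrame();
                if (frame == null) break;

                // Grayscale + blur so small sensor noise doesn't register as motion.
                var gray = new Mat();
                CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);
                CvInvoke.GaussianBlur(gray, gray, new Size(21, 21), 0);

                if (previous != null)
                {
                    // Pixels that changed between frames form the motion mask.
                    var diff = new Mat();
                    CvInvoke.AbsDiff(previous, gray, diff);
                    CvInvoke.Threshold(diff, diff, 25, 255, ThresholdType.Binary);

                    int half = diff.Width / 2;
                    int left = CvInvoke.CountNonZero(new Mat(diff, new Rectangle(0, 0, half, diff.Height)));
                    int right = CvInvoke.CountNonZero(new Mat(diff, new Rectangle(half, 0, diff.Width - half, diff.Height)));

                    if (left + right > 500) // ignore tiny amounts of motion
                        Console.WriteLine(right > left ? "move right" : "move left");
                }

                previous = gray; // (Mat disposal omitted for brevity)
            }
        }
    }
}
```

In Unity you'd run the same idea against WebCamTexture frames (or via the OpenCV plugin mentioned above) and feed the result into your player controller instead of printing it.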