I would like to create a shared AR game on Android phones, where I want to:
spawn a cube for each player on an ImageTarget
allow them to control the position of their cube
allow them to see the movements of all players' cubes
I'm using Vuforia as my AR library and PUN 2 as my networking library. I have no issue synchronizing the positions and rotations of all cubes. However, the cubes do not stay on the ImageTarget properly and "jump" around. On the other hand, if I place my two phones very close together and point them at the ImageTarget at roughly the same angle, the cubes do not jump as much.
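For context, the sync component on each cube is a standard PUN 2 observable, roughly like this (a simplified sketch: the class name is illustrative, and parenting the cubes under the ImageTarget and syncing their local pose is my setup):

```csharp
using Photon.Pun;
using UnityEngine;

// Attached to the cube prefab, which is spawned via PhotonNetwork.Instantiate
// and parented under the ImageTarget on each device.
public class CubeSync : MonoBehaviourPun, IPunObservable
{
    public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.IsWriting)
        {
            // Owning player: send the cube's pose relative to the ImageTarget.
            stream.SendNext(transform.localPosition);
            stream.SendNext(transform.localRotation);
        }
        else
        {
            // Remote copies: apply the received pose.
            transform.localPosition = (Vector3)stream.ReceiveNext();
            transform.localRotation = (Quaternion)stream.ReceiveNext();
        }
    }
}
```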
This leads me to think that the 2 instances of ARCamera fail to realize that they are pointing at the same ImageTarget from 2 different angles, and instead think that the ImageTarget exists in 2 different orientations at the same time.
Is there any way for me to tell Vuforia that I'm using multiple instances of ARCamera pointing at the same ImageTarget? (Or if my hypothesis is completely wrong, how do I actually make a multiplayer AR game?)
Thanks so much in advance!
p.s. I know the Vuforia forums are a better place to ask this question but unfortunately that forum is not particularly active, so I'm trying my luck here.
I've solved the issue by going to the ARCamera GameObject and, in the Vuforia Behaviour component, changing the World Center Mode from DEVICE to FIRST_TARGET. This allows multiple ARCamera instances to be in different positions.
More info on World Center Mode can be found here.
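For anyone who wants to set this from a script instead of the Inspector, something like the sketch below should work. Caveat: the exact method and enum names have moved around between Vuforia versions, so treat this as an assumption and check it against your version's API reference:

```csharp
using UnityEngine;
using Vuforia;

public class WorldCenterSetup : MonoBehaviour
{
    void Awake()
    {
        // Assumed API: VuforiaBehaviour.Instance with SetWorldCenterMode;
        // older releases expose the same setting through different types.
        VuforiaBehaviour.Instance.SetWorldCenterMode(WorldCenterMode.FIRST_TARGET);
    }
}
```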
Currently I'm working on a 2D game with destructible terrain in Unity. It works great, except for one thing: collision generation. Since the terrain is destructible, I have to generate collisions on the fly. I tried, and the performance is awful: from 1000 fps while editing the terrain down to 1 fps when I enable collision generation. This is a huge issue, and I know it's possible within Unity because this guy:
https://forum.unity.com/threads/wip-nimbatus.221798/
created it with collisions as well. I tried contacting him but no response yet! Do any of you have ideas on what I can do? Thanks!
I found a rather annoying but working solution to the issue. Basically, what was wrong was that I was setting every point as a vertex. The guy I messaged told me that when he made his game, the 2D engine wasn't a thing in Unity yet, so he was forced to use regular 3D components. I didn't want to do this because it could limit what I can do with 2D packages, since I wouldn't be using the 2D engine. But in the end I decided that unless I could find an extremely fast edge-finding algorithm that supported concave meshes WITH holes, I would just need to use the regular Unity 3D MeshCollider component.

I'm not sure why I was having an issue with the polygon collider, as I thought it would work the same as the MeshCollider but in 2D. I think the issue was that I was setting the path of the polygon collider to every vertex, which is wrong: I would need to set the edge (outline) vertices only. (My mesh does not share vertices, so I'm not sure how to find the edges.) Of course I could rewrite it to share vertices, but I think I will be fine with the 3D components, for now at least.
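For anyone hitting the same problem, the two routes look roughly like this (a sketch, assuming you already have the edited terrain mesh and, for the 2D route, an ordered outline of edge vertices):

```csharp
using UnityEngine;

public static class TerrainColliderRebuild
{
    // 3D route (what I ended up using): reassigning sharedMesh forces the
    // MeshCollider to re-cook the collision data from the edited mesh.
    public static void RebuildMeshCollider(MeshCollider collider, Mesh editedMesh)
    {
        collider.sharedMesh = null;       // clear the cached cooked mesh
        collider.sharedMesh = editedMesh; // re-cook from the new geometry
    }

    // 2D route: the path must be the ordered outline of the shape
    // (edge vertices only), not every vertex in the mesh.
    public static void RebuildPolygonCollider(PolygonCollider2D collider, Vector2[] outline)
    {
        collider.pathCount = 1;
        collider.SetPath(0, outline);
    }
}
```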
I am building an app for the HoloLens gen 1 device using Unity 2018.3.13f and MRTK v2 RC1. I have a simple AR design with 2 text objects and 1 RawImage object. After building the project and deploying it to the HoloLens, the AR objects end up behind the spatial mesh (you know, all those spatial triangles), but I want all the objects to be in front of the wall.
How do I accomplish this?
The canvas is set to be on the main camera
I have the original settings for the DefaultMixedRealityConfigurationProfile, in case something there needs to be changed.
This is how it looks through the HoloLens when the app does not show the mesh of the wall (sorry for the bad quality)
and this is how it looks when it falls behind the mesh
Do I need to add a mesh renderer or something on the MainCamera to make this possible?
Any help is appreciated, thanks!
I don't believe that MRTK v2 (as of 2019/5/9) has code that will automatically ensure a specific object stays positioned between the camera and other arbitrary meshes and colliders. (The spatial awareness mesh is one such mesh, though you could also imagine an arbitrary box or plane in the scene occluding the object, in which case you might want your "in between" object to stay in front of both types of potentially occluding things.)
There used to be a script in the HoloToolkit (HTK) called Tagalong.cs that would do something like this by raycasting from the camera to collidable objects:
https://github.com/microsoft/MixedRealityToolkit-Unity/blob/htk_release/Assets/HoloToolkit/Utilities/Scripts/Tagalong.cs
I think this single large script got broken up into smaller scripts (i.e. into the specific solver behaviors here):
https://github.com/microsoft/MixedRealityToolkit-Unity/tree/mrtk_release/Assets/MixedRealityToolkit.SDK/Features/Utilities/Solvers
However, from what I can tell, the specific behavior of "keep things automatically between the camera and whatever collidable object" wasn't preserved; it looks like it didn't make it into v2. Someone can correct me here if I'm wrong.
Going forward, there are a couple of possibilities:
1) File an issue on GitHub (https://github.com/microsoft/MixedRealityToolkit-Unity/issues) to request that this feature be ported over.
2) Use the code in Tagalong.cs to write your own solver that accomplishes this (the code looks to be all there; there's just some work needed to reorder it to handle what you want). A rough sketch of the core idea follows below.
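The core of that Tagalong-style idea, as a rough sketch (plain Unity raycasting, not MRTK API; the offset value is illustrative):

```csharp
using UnityEngine;

// Each frame, raycast from the camera toward this object; if something
// (e.g. the spatial mesh) is in the way, pull the object in front of it.
public class KeepInFront : MonoBehaviour
{
    public float surfaceOffset = 0.05f; // meters to float in front of the hit

    void LateUpdate()
    {
        Transform cam = Camera.main.transform;
        Vector3 toObject = transform.position - cam.position;
        Ray ray = new Ray(cam.position, toObject.normalized);

        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, toObject.magnitude))
        {
            // A collider sits between the camera and the object:
            // reposition the object just in front of the hit point.
            transform.position = hit.point - ray.direction * surfaceOffset;
        }
    }
}
```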
If you use a sprite renderer, set its Order in Layer (to 0 or -1).
If you use a mesh renderer, try disabling Dynamic Occluded.
You can also try changing the order of the sorting layers under Edit -> Project Settings -> Sorting Layers.
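If you'd rather set these from code, the scripting equivalents look like this (assuming standard Unity components; the Dynamic Occluded checkbox in the Inspector corresponds to allowOcclusionWhenDynamic in the API):

```csharp
using UnityEngine;

public class RenderSettingsFix : MonoBehaviour
{
    void Start()
    {
        // Sprite renderer: adjust the Order in Layer.
        var sprite = GetComponent<SpriteRenderer>();
        if (sprite != null)
            sprite.sortingOrder = -1;

        // Mesh renderer: disable dynamic occlusion culling.
        var mesh = GetComponent<MeshRenderer>();
        if (mesh != null)
            mesh.allowOcclusionWhenDynamic = false;
    }
}
```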
With Unity 5.6, the NavMesh / Navigation system has changed in some parts. My question is:
Is it now possible to create a NavMesh that covers a whole sphere?
Thank you for your answers!
I think it would theoretically work, but there is no feature (yet, maybe) that would do it for you. You'd have to bake a separate NavMesh for every mesh and then link them together according to their positions.
If you do make a spherical NavMesh, you could make a video or GIF of it.
Here is my spherical NavMesh on complex topology: youtube
In short: I baked my planet 6 times, rotating a different side up each time. Then I manually added the baked NavMesh pieces, setting their rotations, and connected them with off-mesh links (of course, this would need to be done more accurately for production).
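For reference, connecting two separately baked pieces at runtime looks roughly like this (a sketch; I placed my links by hand in the editor, and the endpoint transforms here are illustrative):

```csharp
using UnityEngine;
using UnityEngine.AI;

public class LinkNavMeshPieces : MonoBehaviour
{
    // Endpoints sitting on the borders of two adjacent baked pieces.
    public Transform startPoint;
    public Transform endPoint;

    void Start()
    {
        // An OffMeshLink lets agents cross between NavMeshes that were
        // baked separately and are not joined by the bake itself.
        OffMeshLink link = gameObject.AddComponent<OffMeshLink>();
        link.startTransform = startPoint;
        link.endTransform = endPoint;
        link.biDirectional = true;
        link.activated = true;
    }
}
```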
Does anyone know how to use Goblin XNA to implement ball control in the same way as one might find in the board game Labyrinth?
There don't appear to be any tutorials or information at all regarding how to do this, despite there being a demo video displaying just such a thing.
I've set up the environment and gravity, and added the ground and a sphere. I use WorldTransformation.Decompose to extract the current orientation of the board. I know the next step will be either ApplyLinearVelocity or AddForce on the sphere based on the current board orientation, but I don't know how to constantly apply these methods to the ball so that it moves in response to the movement of the board. Adding code to the Draw or Update methods only executes the code a single time. Is anyone familiar with Goblin XNA at all and able to help?
As far as I can see, in Goblin XNA the Update and Draw methods are called the same way as in standard XNA games, i.e. once per frame. Can you give more specific information? Source code, perhaps?
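To illustrate what I mean, in a standard XNA loop the force would be applied every frame inside Update, something like the sketch below (boardWorldTransform and ballPhysics stand in for whatever your Goblin XNA setup exposes, e.g. its AddForce / ApplyLinearVelocity):

```csharp
// Requires: using Microsoft.Xna.Framework;
// Inside your Game (or Goblin XNA scene) class. Update runs once per
// frame, so a force applied here is applied continuously, not once.
protected override void Update(GameTime gameTime)
{
    Vector3 scale, translation;
    Quaternion rotation;

    // Extract the board's current orientation from its world transform
    // (boardWorldTransform is an assumed field from your setup).
    boardWorldTransform.Decompose(out scale, out rotation, out translation);

    // The board's up axis, tilted by the marker orientation.
    Vector3 boardUp = Vector3.Transform(Vector3.Up, rotation);

    // Project gravity onto the board plane to get the downhill direction.
    Vector3 gravity = new Vector3(0f, -9.81f, 0f);
    Vector3 downhill = gravity - boardUp * Vector3.Dot(gravity, boardUp);

    // Hypothetical physics call; substitute your sphere's actual
    // Goblin XNA physics method here.
    ballPhysics.AddForce(downhill * (float)gameTime.ElapsedGameTime.TotalSeconds);

    base.Update(gameTime);
}
```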
For our game we have to create collision detection.
The problem is that the collided objects are in different canvases/layers, which makes collision detection by point location impossible.
Does anyone have an idea how to solve this?
It's hard to give a great answer without some more information, but if all your layers are the same size, then you can just roll your own collision detection. All you need to know is the locations and sizes of the two things to be tested. Then you just check whether one rectangle intersects the other.
There is also a function that might be useful, called TranslatePoint. This translates coordinates from one UIElement to another. So if you had a ball bouncing around in a smaller area of the screen with its own local coordinate system, you could get the ball's coordinates relative to the entire screen with this function.
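A minimal version of that rectangle test, assuming both rectangles have already been translated into the same coordinate space:

```csharp
// Axis-aligned rectangle overlap: the rectangles intersect if neither
// one lies entirely to one side of (or above/below) the other.
public static bool Intersects(
    double x1, double y1, double w1, double h1,
    double x2, double y2, double w2, double h2)
{
    return x1 < x2 + w2 && x2 < x1 + w1 &&
           y1 < y2 + h2 && y2 < y1 + h1;
}
```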
Can I suggest you try using the Farseer physics engine just to save yourself some pain?
http://farseerphysics.codeplex.com/
It's very good and is in use in some WP7 games already.
http://www.farseergames.com/
There are also some Blend behaviors and helpers to make using it even easier:
http://physicshelper.codeplex.com/