Anti-Aliasing with Google VR SDK - C#

I built a low-poly 3D scene in Unity (using GVR SDK v1.110.0), but when viewing it on my Pixel phone, the models are full of jagged edges.
Originally I thought it was a problem with my models, but when I create a new empty scene in Unity, add a single cube, and build it to the phone, it is also full of jagged edges (even with 8x anti-aliasing enabled!).
I suspect the problem comes from the GVR SDK. Can anyone answer my question?
P.S. I uploaded a Unity package (including a cube model) that you can view on your Daydream phone. I didn't write any code; I just finished the basic GVR setup and enabled anti-aliasing, but the jagged edges are still there. You can get it from the link below. Thanks.
Test_Demo_Unity_Pakage
Sorry, I overlooked how the picture looks; you can see the jagged edges along the cube in the image below.
Sawtooth-Cubes:

I was able to improve some of the jagged edges by setting 8x MSAA in Quality Settings and enabling the 24-bit depth buffer in Player Settings under Daydream/Cardboard.
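For completeness, here is a minimal sketch of forcing the MSAA level from script as well, in case the active quality level on the device overrides the editor setting. QualitySettings.antiAliasing is standard Unity API; the 24-bit depth buffer mentioned above is a Player Settings option and has no runtime toggle here.

```csharp
using UnityEngine;

// Forces 8x MSAA at startup, overriding whatever the active
// quality level specifies on the device.
public class ForceMsaa : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.antiAliasing = 8; // valid values: 0, 2, 4, 8
    }
}
```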

How to fix "Unity transform position changes after build to WebGL"?

This is my first Unity project. Everything works well in the Game view in the Unity Editor. However, after I built the project and ran it in Google Chrome, I found that the positions of some GameObjects had changed.
Here is a more detailed explanation: the left side is what I saw in the Game view, which was perfect. The right side is what I saw after building the project. The position of the grey picture had changed.
Does anyone know how to fix this?
P.S. I am using Unity 2020.3.18 with the WebGL platform.
Try exporting again, but with the same resolution as the Game view. In this case you would want to export at a 400x600 resolution (you can also edit the "index.html" file to change and experiment with the resolution). The problem is that changing the game's resolution can affect where certain objects end up and how much of the scene you can see.
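If the layout needs to survive other browser resolutions too, a Canvas Scaler with a fixed reference resolution usually keeps the build consistent with the Game view. A minimal sketch using standard Unity UI API (the 400x600 value just mirrors the example above):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Attach to the Canvas: scales UI elements relative to a fixed
// reference resolution so the WebGL build lays out like the Game view.
[RequireComponent(typeof(CanvasScaler))]
public class MatchGameViewResolution : MonoBehaviour
{
    void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(400f, 600f); // Game view size
        scaler.matchWidthOrHeight = 0.5f; // balance width vs. height scaling
    }
}
```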

How to make soft body (jello) objects or arrays? [duplicate]

I am currently using the NVIDIA FleX package in Unity3D to create soft-bodied, jelly objects. I'm using Unity for animation only, not game dev.
What I am aiming to make is a transparent, jello sphere that retains its spherical shape with elasticity.
The first way I've tried to achieve this is using Flex Array + fluid settings. I've been playing with the settings, but I can't get it to remain a sphere; it just becomes a more or less viscous fluid blob.
The second way is using Flex Soft + fluid settings. It is much better in terms of physics, but even with "draw particles" off, the water droplets are each separate and not one jelly sphere.
This is what it looks like before hitting play, where the left is with Flex Array and the right is Flex Soft. The particles for Array are visible but not for Soft.
This is after hitting play, where the Array becomes one viscous fluid, but not a sphere, and the Soft is very jello-like but the water droplets are all separated.
A solution for either of the two ways would be much appreciated!
The standard approach is to create an NVIDIA Flex Controller first...
Then you should also create a Flex Soft Asset...
Then you should create or select a game object and through the Add Component tab in the game object's inspector, find the Flex Soft Actor component [see it loaded up in the image below]...
Ensure your Soft Actor Asset mentioned previously has your required mesh type selected in the inspector option [I chose sphere in the image here] and check to see it looks something like the image below to be sure...
So after that, hopefully, you can just press play and see it in action as it drops and contorts for you.
If not, I have created a quick example for you to download as a unitypackage.
It may still require further setup with the Package Manager, as the Flex plugin is already inside the package I'm providing here [using Unity 2020.3.5f1].
Flex in unity package
Anyway, hope this gets you started and somewhere towards your goal with Flex.
As a bonus, I've added a small script to move the Flex object, since this falls outside the usual approach: we have to get the NVIDIA Flex component class of choice and invoke its ApplyImpulse method, as sketched below.
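A minimal sketch of that mover script (the FlexSoftActor type, the NVIDIA.Flex namespace, and the ApplyImpulse signature are assumptions based on the plugin; adjust them to match your plugin version):

```csharp
using UnityEngine;
using NVIDIA.Flex; // namespace assumed from the FleX Unity plugin

// Pushes a Flex soft body around by invoking ApplyImpulse on its actor,
// as described in the answer above.
[RequireComponent(typeof(FlexSoftActor))]
public class FlexImpulseMover : MonoBehaviour
{
    public float impulseStrength = 5f;
    FlexSoftActor _actor;

    void Start()
    {
        _actor = GetComponent<FlexSoftActor>();
    }

    void Update()
    {
        // Drive the soft body with the standard Horizontal/Vertical axes.
        var dir = new Vector3(Input.GetAxis("Horizontal"), 0f,
                              Input.GetAxis("Vertical"));
        if (dir.sqrMagnitude > 0f)
            _actor.ApplyImpulse(dir.normalized * impulseStrength * Time.deltaTime);
    }
}
```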
Cheers :)
Edit: There is a small set of 3 tutorials from NVIDIA Gameworks on integrating the plugin with Unity and demonstrating some features; this material is included in my downloadable package provided above.
Here is the YouTube link to the set:
Nvidia Gameworks FleX tutorials on Youtube
Edit 2: Rereading your question made me think I hadn't really given you a definitive answer about using a Cloth Actor and having the Mesh Renderer deform via the Flex cloth deform component.
I am providing a link to another Unity package here that shows this in action, letting you see the game object and how the NVIDIA Flex cloth component works with the standard Mesh Filter and Mesh Renderer. Hope this answers your question more accurately :)
Example also using Cloth Actors as well as Soft Actors in NVIDIA FleX

Getting Movement with Google Cardboard in Unity

I want to put a scene in Unity into virtual reality using Google Cardboard.
I used to be able to just drag a CardboardMain prefab into the scene, delete the main camera, use CardboardMain as the camera position, and CardboardHead to track where the user was looking.
After reading the release notes for the new updates, I thought I could drag a GVREditorEmulator and GVRControllerMain into the scene, and keep the normal camera.
Unfortunately, I can't figure out how to get the camera to follow my character with this new setup. (In this case, a rolling ball.)
If I change the position of the normal camera, it looks like it works fine in Unity, but as soon as I upload it to my phone, the user stays in the same place, while the ball rolls away. (The user can still control the ball's movements, but the camera/user doesn't follow the ball at all.)
I had thought that the chase cam demo would be useful, but that's only for Daydream, and I'm using Cardboard.
This trick seemed to work for some people. I tried it on a previous version of Unity and a previous version of the SDK and it did not seem to work. I may just need to try it on this new version, but I'm worried about going into the released code and editing it, so I'd prefer answers that don't require this.
Is there any way I can get the user to move in a Google Cardboard scene in Unity when I upload it to my iPhone?
UPDATE:
It looks as though my main camera object is not moving, making me think that something is resetting it back to the center every time, lending some more credence to the "trick" earlier. I will now try "the trick" to see if it works.
UPDATE: It doesn't look like the lines listed in the "trick" are there anymore, and the ones that are similar in the new program don't even seem to be running. Still trying to figure out what continues to reset the main camera's position.
UPDATE: Just got a response back from Google on GitHub (or at least someone working on the SDK) that says "You just need to make the node that follows the player a parent of the camera, not the same game object as the camera." I'm not exactly sure what that means, so if someone could explain that, that would most likely solve my problem. If I figure it out on my own I'll post back here.
UPDATE: Zarko Ristic posted an answer that explained what this was, but unfortunately the tracking is still off. I found out how to get Google Cardboard to work with the old SDK and posted an answer on that. Still looking for ways to get the new SDK to follow a character properly.
You can't change the position of the camera in a Cardboard application; the MainCamera's position must always be 0,0,0. But you can simply create an empty GameObject and make it the parent of the MainCamera. In Cardboard games you move the camera's parent instead of the MainCamera directly.
Add the script that tracks the ball to the MainCamera's parent GameObject, as in the sketch below.
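A minimal sketch of that setup ("target" and "offset" are illustrative names, not part of the GVR SDK):

```csharp
using UnityEngine;

// Attach to the empty parent of the Main Camera. The SDK keeps the child
// camera at local (0,0,0) and applies head tracking on top, so moving
// this rig moves the player's viewpoint.
public class CameraRigFollow : MonoBehaviour
{
    public Transform target;                          // e.g. the rolling ball
    public Vector3 offset = new Vector3(0f, 2f, -4f); // rig position behind it

    void LateUpdate()
    {
        transform.position = target.position + offset;
    }
}
```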
This does not answer my question, but is a solution to the problem.
Do not use the newest version of the SDK; use version 0.6.
When building the app in Unity, do not enable VR under the build settings. (VR will still be enabled in the app.) (Credit: Zarko Ristic)
When putting the app onto your phone, if Xcode prompts you to change any settings, you can ignore it.
In addition, disable bitcode under "Build Settings -> Enable Bitcode -> No". (Currently, this will not let you put your app on the App Store. I would greatly appreciate it if anyone has information on how to get it to run without doing this.)
Now your app should run on your phone correctly.

Vuforia 6.2.2 Unity 5.4.4 Camera does not work

Hi everyone.
The Vuforia camera does not work; I get only a black screen on Android 6.0+.
The Vuforia version is 6.2.2 and the Unity version is 5.4.4.
However, the Vuforia camera works on Android versions below 5.0.
How can I fix this issue?
I hope you can help me with this.
This is an interesting problem I have, and there is an unacceptable temporary fix I am using while testing this system out. On the iPhone 7, to get past this, suspend the app and then return to it. After about 2 seconds the camera will work. I am guessing it will behave similarly on Android. I will update with a better fix if I come across a real one during my testing, if I decide to use this system.
Edit:
Short answer:
Delete the meta data for any pre-existing custom camera controller script. If you are using your own camera controller, you need to disable the Vuforia one and delete the meta file for yours. You are basically hijacking the camera feed after it starts.
Long answer: I began this application by building out my own system, and to test Vuforia I disabled these items (like the camera feed). I went through the logs and saw that even with these items disabled, the camera feed was still running, and this feed started AFTER the Vuforia camera, so my own Start() methods (even though they were disabled) were hijacking the camera from Vuforia. It turns out that the meta data for my camera controller script was still poised to run the script even though everything was disabled. After deleting my camera controller script's meta data it worked fine. You can also just delete the camera controller, which deletes the meta file with it. By camera controller I mean the custom camera controller I wrote before I added Vuforia. This is a hard fix to find because everything works fine in Unity, just not when you build to a device. The meta data doesn't seem to update for the device, just the Unity engine.
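To illustrate the kind of conflict described, here is a hypothetical custom camera controller whose Start() grabs the device camera; if a script like this (or its stale .meta file) still runs in the build, it can steal the feed from Vuforia. The class and field names are illustrative, not from Vuforia:

```csharp
using UnityEngine;

// Hypothetical pre-existing camera controller of the kind the answer
// describes. Its Start() takes over the device camera via WebCamTexture,
// which can hijack the feed after Vuforia's camera has started.
public class CustomCameraFeed : MonoBehaviour
{
    private WebCamTexture _feed;

    void Start()
    {
        _feed = new WebCamTexture();
        GetComponent<Renderer>().material.mainTexture = _feed;
        _feed.Play(); // takes exclusive hold of the device camera
    }
}
```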
If you are using the Vuforia camera, make sure you either use the Vuforia plane that comes as a child of the camera or delete the meta data of whatever camera script you wrote. You should get a camera feed in a new empty project just by dropping in the Vuforia camera; there is no need to build your own script, and if you do, make sure one isn't overriding the other like mine was.
If you want to simply test that it isn't your device or code, create a NEW empty Unity project, import Vuforia (no need to import the database, just the SDK), then drop the Vuforia camera into the project and test it. Don't add anything extra or do any image recognition. If this works, it's your code somewhere.
I am using Vuforia 6.2.10, Unity 5.4.4f (64-bit) and a Nexus 7 running Android 6. I had the same issue where the camera was black. I started over, adding one component at a time. The AR Camera alone worked fine. Adding an Image Target also worked. I added a plane and an image to the Image Target and the camera failed to work. Setting the image's texture type to Sprite (2D and UI) seemed to help.
I also discovered that deleting the app from the device and building a fresh APK each time made it work.
I am not sure I'd count on Vuforia in the near term.

Need to use Tiled Map editor with Unity 4.3.4

I switched to Unity a few weeks ago. I am developing a 2D platformer. For creating the maps I am using the Tiled map editor from www.mapeditor.org. I have created a basic map and included the tilesheet PNG and the .tmx file (saved as XML) in the project's Assets. I am able to read the XML, that is, all the GIDs. But I don't know how to access the particular portion (tile) of the tilesheet that corresponds to a GID.
I think for this I need to load the sprite into memory and select a tile from it (by specifying height, width and coordinates) to display on screen, as done here: http://gamedevelopment.tutsplus.com/tutorials/parsing-and-rendering-tiled-tmx-format-maps-in-your-own-game-engine--gamedev-3104
But that tutorial is for Flash; how can I achieve the same thing in Unity using C#? Note the copyPixels stuff in the Flash code. I thought I could use ReadPixels, but that only reads from the screen, not from texture memory.
Thanks.
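For reference, the direct Unity equivalent avoids pixel copying altogether: Sprite.Create can reference a sub-rectangle of the tilesheet texture. A minimal sketch, assuming a left-to-right, top-to-bottom sheet with no margin or spacing (tileWidth/tileHeight come from your .tmx file):

```csharp
using UnityEngine;

// Maps a Tiled GID to a Sprite that references the matching sub-rectangle
// of the tilesheet (no pixel copying, unlike Flash's copyPixels).
public static class TilesheetUtil
{
    public static Sprite SpriteForGid(Texture2D sheet, int gid,
                                      int tileWidth, int tileHeight)
    {
        int columns = sheet.width / tileWidth;
        int index = gid - 1;              // Tiled GIDs start at 1 (0 = empty)
        int col = index % columns;
        int row = index / columns;

        // Tiled counts rows from the top; Unity texture coordinates start
        // at the bottom-left, so flip the row.
        float y = sheet.height - (row + 1) * tileHeight;
        var rect = new Rect(col * tileWidth, y, tileWidth, tileHeight);
        return Sprite.Create(sheet, rect, new Vector2(0.5f, 0.5f));
    }
}
```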
If you're working on Windows, then the Tiled2Unity utility sounds like it will fit your needs. It exports Object Layers and was made with Unity 4.3 features in mind.
(Full disclosure: I'm the author of Tiled2Unity)
EDIT: Tiled2Unity is now available for Mac users as well. There is a command-line version for Linux users. (All free.)
If you can describe your problem and what you are trying to do more carefully, maybe I or someone else can help you better. For example, what exactly do you mean by "load a sprite into memory"? Or "select a tile"? Copying pixel data is SLOWWW, and hopefully you don't mean to be doing this in real time.
Here is my real advice though:
Have you checked out UTiled? It does tiled maps in 2D in Unity so I think it already does what you want and it's free.
There is also UniTMX... free.
There is also 'Tiled Tilemaps'... which is like $2.
I also built a system that can do what I think you are trying to do (your link is broken, so I can't be sure).
The system I built is called 'Tiled to Unity' (you can search for it on YouTube to see if it does what you want). It allows you to attach GameObjects to tiles, supports tile variants, and can do 3D tiles.
Anyway, trying to roll your own pipeline from Tiled into Unity is a ton of work, and with these tools available, I think it is almost certainly unnecessary... that's just my opinion.
