Unity 3D: Camera Background Color not applying - c#

In my 3D Google Cardboard VR mini game, before switching to another scene, I'd like to fade the current scene's background to white for a nice transition effect.
I built a function which changes the color value from yellow to white within 2 seconds:
Within Update():
if (started) {
    if (startTime >= startDelay) {
        //start
    } else {
        //fade
        thisBrightness = startTime / 2; // runs 2 seconds
        if (thisBrightness > 1) {
            thisBrightness = 1; // just in case
        }
        Camera.main.backgroundColor = Color.Lerp (mainCameraBackground, mainCameraFaded, thisBrightness);
        startTime += Time.deltaTime;
    }
}
I logged the float "thisBrightness" and it changes from 0 to 1 as it should. Also, I can see in the inspector that the color field in Camera > Background changes, but in my Game Preview it does NOT; the color stays the same.
Does anybody have an explanation and a solution for this?
1000 thanks!
Felix
Unity 5.5.0f3 personal
Google Cardboard 1.0

Edit: I just came back to this question and found it's not really answered.
I found out that the main camera is converted into separate left and right cameras by the Google VR SDK.
You'll need to handle both separately; see the code below for how I resolved this in the end:
public Camera leftCamera;
public Camera rightCamera;
mainCameraBackground = new Color (1, 0.8f, 0); // set to yellow initially
mainCameraFaded = new Color(1f,1f,1f);
mainCameraCurrent = new Color (0f, 0f, 0f);
// main camera is converted to left + right by Google VR SDK.
// this is why we need to handle both separately
leftCamera.clearFlags = CameraClearFlags.SolidColor;
leftCamera.backgroundColor = mainCameraBackground;
rightCamera.clearFlags = CameraClearFlags.SolidColor;
rightCamera.backgroundColor = mainCameraBackground;
and then:
mainCameraCurrent = Color.Lerp (mainCameraBackground, mainCameraFaded, thisBrightness);
rightCamera.backgroundColor = mainCameraCurrent;
leftCamera.backgroundColor = mainCameraCurrent;
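For reference, a minimal sketch of how the per-frame part can look once both cameras are wired up (assuming the fields from the snippets above; the startDelay check is omitted here):
void Update()
{
    if (!started) return;

    // advance the 2 second fade
    startTime += Time.deltaTime;
    thisBrightness = Mathf.Clamp01(startTime / 2f);

    mainCameraCurrent = Color.Lerp(mainCameraBackground, mainCameraFaded, thisBrightness);

    // the Google VR SDK renders through the left/right eye cameras, so set both
    leftCamera.backgroundColor = mainCameraCurrent;
    rightCamera.backgroundColor = mainCameraCurrent;
}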

Related

Unity: screen space vs window space vs monitor space?

I am trying to take a UI object's screen space position and translate that to what I am calling 'monitor space'.
As far as I can tell, screen space in Unity is relative to the application's window. That is, even if the app is not full screen and has been moved around on your monitor, (0,0) will still be the lower left of the app window.
I need to translate one of those screen space values into the actual position within the user's monitor. This is especially important when considering that the user might have multiple monitors.
I am not finding anything to get this done, though.
I am hoping to find a platform-agnostic solution, but if it must be Windows-only then I can make that work as well.
Any help on this would be greatly appreciated.
Thank you
After TEEBQNE's answer I also wanted to give it a shot using a native solution.
As mentioned, this will only work for Windows PC Standalone and requires:
Unity's new Input System (see Quick Start)
One of the solutions from Getting mouse position in c#
For example, if you want to use System.Windows.Forms then copy the corresponding DLL from
C:\Windows\Microsoft.NET\Framework64\v4.x.xx
into your project under Assets/Plugins
Then in code you can use
using System.Windows.Forms;
Whether this is more efficient (or even works this way) I can't tell - I'm only on my phone here - but I hope the idea is clear ;)
So the idea is:
Store the initial cursor position
Set your cursor to the positions of interest using WarpCursorPosition, passing Unity screen coordinates as input
Read out the resulting absolute monitor coordinates using the native API
Finally, reset the cursor to the original position
This might look somewhat like
using UnityEngine;
using UnityEngine.InputSystem;
public static class MonitorUtils
{
    // Store reference to main Camera (Camera.main is expensive)
    private static Camera _mainCamera;

    // Persistent array to fetch rect corners,
    // cheaper than creating and throwing away a new array every time,
    // especially when fetching them every frame
    private static readonly Vector3[] corners = new Vector3[4];

    // For getting the UI rect corners in monitor pixel coordinates
    public static void GetMonitorRectCorners(this RectTransform rectTransform, Vector2Int[] output, bool isScreenSpaceCanvas = true, Camera camera = null)
    {
        // Lazy initialization of optional parameter
        if (!camera) camera = GetMainCamera();

        // Store initial mouse position
        var originalMousePosition = Mouse.current.position.ReadValue();

        // Get the four world space positions of your RectTransform's corners
        // in the order bottom left, top left, top right, bottom right
        // See https://docs.unity3d.com/ScriptReference/RectTransform.GetWorldCorners.html
        rectTransform.GetWorldCorners(corners);

        // Iterate the four corners
        for (var i = 0; i < 4; i++)
        {
            if (!isScreenSpaceCanvas)
            {
                // Get the monitor position from the world position (see below)
                output[i] = WorldToMonitorPoint(corners[i], camera);
            }
            else
            {
                // Get the monitor position from the screen position (see below)
                output[i] = ScreenToMonitorPoint(corners[i], camera);
            }
        }

        // Restore mouse position
        Mouse.current.WarpCursorPosition(originalMousePosition);
    }

    // For getting a single Unity world space position in monitor pixel coordinates
    public static Vector2Int WorldToMonitorPoint(Vector3 worldPoint, Camera camera = null)
    {
        // Lazy initialization of optional parameter
        if (!camera) camera = GetMainCamera();

        var screenPos = camera.WorldToScreenPoint(worldPoint);
        return ScreenToMonitorPoint(screenPos, camera);
    }

    // For getting a single Unity screen space position in monitor pixel coordinates
    public static Vector2Int ScreenToMonitorPoint(Vector3 screenPos, Camera camera = null)
    {
        // Lazy initialization of optional parameter
        if (!camera) camera = GetMainCamera();

        // Set the system cursor position there based on Unity screen space
        Mouse.current.WarpCursorPosition(screenPos);

        // Then get the actual system mouse position (see below)
        return GetSystemMousePosition();
    }

    // Get and store the main camera
    private static Camera GetMainCamera()
    {
        if (!_mainCamera) _mainCamera = Camera.main;
        return _mainCamera;
    }

    // Convert the system mouse position to Vector2Int for working
    // with it in Unity
    private static Vector2Int GetSystemMousePosition()
    {
        var point = System.Windows.Forms.Cursor.Position;
        return new Vector2Int(point.X, point.Y);
    }
}
So you can either simply use
var monitorPosition = MonitorUtils.WorldToMonitorPoint(someUnityWorldPosition);
// or if you already have the `Camera` reference
//var monitorPosition = MonitorUtils.WorldToMonitorPoint(someUnityWorldPosition, someCamera);
or, if you already have a screen space position, e.g. in a Screen Space Overlay canvas,
var monitorPosition = MonitorUtils.ScreenToMonitorPoint(someScreenPosition);
// or if you already have the `Camera` reference
//var monitorPosition = MonitorUtils.ScreenToMonitorPoint(someScreenPosition, someCamera);
or you can get all four corners of a UI element at once using e.g.
var monitorCorners = new Vector2Int [4];
someRectTransform.GetMonitorRectCorners(monitorCorners, isScreenSpaceCanvas);
// or again if you already have a camera reference
//someRectTransform.GetMonitorRectCorners(monitorCorners, isScreenSpaceCanvas, someCamera);
Little example
using UnityEngine;
using UnityEngine.InputSystem;

public class Example : MonoBehaviour
{
    [Header("References")]
    [SerializeField] private Camera mainCamera;
    [SerializeField] private RectTransform _rectTransform;
    [SerializeField] private Canvas _canvas;

    [Header("Debugging")]
    [SerializeField] private bool isScreenSpace;

    [Header("Output")]
    [SerializeField] private Vector2Int bottomLeft;
    [SerializeField] private Vector2Int topLeft;
    [SerializeField] private Vector2Int topRight;
    [SerializeField] private Vector2Int bottomRight;

    private readonly Vector2Int[] _monitorPixelCornerCoordinates = new Vector2Int[4];

    private void Awake()
    {
        if (!mainCamera) mainCamera = Camera.main;
        if (!_canvas) _canvas = GetComponentInParent<Canvas>();

        isScreenSpace = _canvas.renderMode == RenderMode.ScreenSpaceOverlay;
    }

    private void Update()
    {
        if (Keyboard.current.spaceKey.isPressed)
        {
            _rectTransform.GetMonitorRectCorners(_monitorPixelCornerCoordinates, isScreenSpace);

            bottomLeft = _monitorPixelCornerCoordinates[0];
            topLeft = _monitorPixelCornerCoordinates[1];
            topRight = _monitorPixelCornerCoordinates[2];
            bottomRight = _monitorPixelCornerCoordinates[3];
        }
    }
}
You will see that moving your mouse each and every frame isn't a good idea though ^^
Now you can see the four corners being updated depending on the actual position on the screen.
Note: while Unity screen space has (0,0) at the bottom left, typical display pixel coordinates have (0,0) at the top left, so you might need to invert the Y axis.
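If you need that, a tiny helper along these lines should do (a sketch; FlipY and displayHeight are names I made up for the example):
// Flip a bottom-left based Unity Y coordinate into a top-left based display Y coordinate.
// displayHeight is assumed to be the monitor height in pixels (e.g. Display.main.systemHeight).
private static int FlipY(int unityY, int displayHeight) => displayHeight - unityY;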
Alright, first off - sorry for the late response, I just got back and was able to type up an answer.
From what I have found, this solution does not work in the editor and produces odd results on Mac with retina display. In the editor, the Screen and Display spaces appear to be exactly the same. There is probably a solution to fix this but I did not look into the specifics. As for Mac, for whatever reason, the internal resolution outputted is always half the actual resolution. I am not sure if this is just a retina display bug with Unity or a general Mac bug. I tested and ran this test script on both a Windows computer and Mac with a retina display. I have yet to test it on any mobile platform.
I do not know exactly what you would like to achieve with the values you wish to find, so I set up a demo scene that displays the values instead of using them.
Here is the demo script:
using UnityEngine;
using System.Collections.Generic;
using UnityEngine.UI;
public class TestScript : MonoBehaviour
{
    [SerializeField] private RectTransform rect = null;
    [SerializeField] private List<Text> text = new List<Text>();
    [SerializeField] private Canvas parentCanvas = null;
    [SerializeField] private Camera mainCam = null;

    private void Start()
    {
        // determine the canvas mode of our UI object
        if (parentCanvas == null)
            parentCanvas = GetComponentInParent<Canvas>();

        // only need a camera in the case of a camera space canvas
        if (parentCanvas.renderMode == RenderMode.ScreenSpaceCamera && mainCam == null)
            mainCam = Camera.main;

        // generate initial data points
        GenerateData();
    }

    /// <summary>
    /// OnClick of our button to test generating data when the object moves
    /// </summary>
    public void GenerateData()
    {
        // the anchored position is relative to screen space if the canvas is an overlay - if not, it will need to be converted to screen space based on our camera
        Vector3 screenPos = parentCanvas.renderMode == RenderMode.ScreenSpaceCamera ? mainCam.WorldToScreenPoint(transform.position) : rect.transform.position;

        // our object's position relative to screen space
        text[0].text = "Screen Pos: " + screenPos;

        // the dimensions of our screen (the current window that is rendering our game)
        text[1].text = "Screen dimensions: " + Screen.width + " " + Screen.height;

        // find our width / height normalized relative to the screen space dimensions
        float x = Mathf.Clamp01(screenPos.x / Screen.width);
        float y = Mathf.Clamp01(screenPos.y / Screen.height);

        // our normalized screen positions
        text[2].text = "Normalized Screen Pos: " + x + " " + y;

        // grab the dimensions of the main renderer - the current monitor our game is rendered on
#if UNITY_STANDALONE_OSX
        text[3].text = "Display dimensions: " + (Display.main.systemWidth * 2f) + " " + (Display.main.systemHeight * 2f);

        // now find the coordinates of the UI object transcribed from normalized screen space coordinates to our monitor / resolution coordinates
        text[4].text = "Display relative pos: " + (Display.main.systemWidth * x * 2f) + " " + (Display.main.systemHeight * y * 2f);
#else
        text[3].text = "Display dimensions: " + Display.main.systemWidth + " " + Display.main.systemHeight;

        // now find the coordinates of the UI object transcribed from normalized screen space coordinates to our monitor / resolution coordinates
        text[4].text = "Display relative pos: " + (Display.main.systemWidth * x) + " " + (Display.main.systemHeight * y);
#endif
    }

    /// <summary>
    /// Just for debugging - can be deleted
    /// </summary>
    private void Update()
    {
        if (Input.GetKey(KeyCode.A))
        {
            rect.anchoredPosition += new Vector2(-10f, 0f);
        }

        if (Input.GetKey(KeyCode.W))
        {
            rect.anchoredPosition += new Vector2(0f, 10f);
        }

        if (Input.GetKey(KeyCode.S))
        {
            rect.anchoredPosition += new Vector2(0f, -10f);
        }

        if (Input.GetKey(KeyCode.D))
        {
            rect.anchoredPosition += new Vector2(10f, 0f);
        }
    }
}
I accounted for the parent canvas being either Overlay or Camera mode and put in a check for an OSX build to adjust to the proper screen dimensions.
Here is a gif of the build on OSX. I set the window to be 1680x1050 and my computer's current resolution is 2880x1800. I also tested it on Windows but did not record it, as the example looks nearly identical.
Let me know if you have more questions about the implementation or if there are issues with other platforms I did not test.
Edit: Just realized you want the screen space coordinate relative to the monitor space. I will correct the snippet in a little bit - in a meeting right now.
Edit2: After a bit more looking, it will not be easy to get the exact coordinates without the window being centered or without getting the standalone window's position. I do not believe there is an easy way to get this information without a DLL, so here is an implementation for Mac and a solution for Windows.
Currently, the solution I have will only get the correct position if the standalone player is windowed and centered on your screen. If the player is centered on the screen, I know that the center of my monitor is at half the dimensions of its resolution, and that the center point of my window matches up with this point. I can then get the bottom left corner of my window relative to my monitor rather than a (0,0) coordinate. As screen space has its bottom left corner at (0,0), you can now adjust a position to monitor space by adding the newly calculated bottom left position to it.
Here is the new GenerateData method:
/// <summary>
/// OnClick of our button to test generating data when the object moves
/// </summary>
public void GenerateData()
{
    // the anchored position is relative to screen space if the canvas is an overlay - if not, it will need to be converted to screen space based on our camera
    Vector3 screenPos = parentCanvas.renderMode == RenderMode.ScreenSpaceCamera ? mainCam.WorldToScreenPoint(transform.position) : rect.transform.position;

    // grab the display dimensions
    Vector2 displayDimensions;

    // bug or something with Mac / retina displays where the Display.main.system dimensions are half of what they actually are
#if UNITY_STANDALONE_OSX || UNITY_EDITOR_OSX
    displayDimensions = new Vector2(Display.main.systemWidth * 2f, Display.main.systemHeight * 2f);
#else
    displayDimensions = new Vector2(Display.main.systemWidth, Display.main.systemHeight);
#endif

    // the center point of our display coordinates
    Vector2 displayCenter = new Vector2(displayDimensions.x / 2f, displayDimensions.y / 2f);

    // half our screen dimensions to find our screen space relative to monitor space
    Vector2 screenDimensionsHalf = new Vector2(Screen.width / 2f, Screen.height / 2f);

    // find the corners of our window relative to the monitor space
    Vector2[] displayCorners = new Vector2[] {
        new Vector2(displayCenter.x - screenDimensionsHalf.x, displayCenter.y - screenDimensionsHalf.y), // bottom left
        new Vector2(displayCenter.x - screenDimensionsHalf.x, displayCenter.y + screenDimensionsHalf.y), // top left
        new Vector2(displayCenter.x + screenDimensionsHalf.x, displayCenter.y + screenDimensionsHalf.y), // top right
        new Vector2(displayCenter.x + screenDimensionsHalf.x, displayCenter.y - screenDimensionsHalf.y)  // bottom right
    };

    for (int z = 0; z < 4; ++z)
    {
        text[z].text = displayCorners[z].ToString();
    }

    // outputting our screen position relative to our monitor
    text[4].text = (new Vector2(screenPos.x, screenPos.y) + displayCorners[0]).ToString();
}
Once you are able to either get or set the window's position, you can properly re-orient the lower-left corner relative to the monitor dimensions, or you can set the window back to the center point of your monitor. The above snippet would also work for a full-screen player; you would just need to determine how far off the aspect ratio of the player window is from your monitor's, which lets you find how large the black bars on the edges would be.
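A rough, untested sketch of that last idea (GetLetterboxOffset is a hypothetical helper; it assumes the full-screen content is centered on the monitor):
// Estimate the black bars added when the player's aspect ratio differs from the monitor's.
Vector2 GetLetterboxOffset(Vector2 displayDimensions, float playerAspect)
{
    float displayAspect = displayDimensions.x / displayDimensions.y;

    if (playerAspect < displayAspect)
    {
        // pillarbox: bars on the left and right
        float contentWidth = displayDimensions.y * playerAspect;
        return new Vector2((displayDimensions.x - contentWidth) / 2f, 0f);
    }

    // letterbox: bars on the top and bottom
    float contentHeight = displayDimensions.x / playerAspect;
    return new Vector2(0f, (displayDimensions.y - contentHeight) / 2f);
}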
I assumed what you had wanted was straightforward, but from what I can tell an OS-agnostic solution would be difficult. My above solution should work on any platform when the player is windowed, provided you can either get or set the standalone window position, and on any platform that is full screen with the theoretical approach I mentioned.
If you want more info on how to adjust the implementation for the full-screened window let me know.

Disable/Toggle visualization of tracked planes in ARCore unity

I have been looking at the code for ARCore Unity for a while, and I want to do one simple task: have a toggle button so the user can place an object in the scene while the tracked planes are visible (so they know where to place it), and once the object is placed, give them the option of visually disabling the tracked planes so the scene looks more realistic. I was able to do this in Android Studio with something like this in the main HelloArActivity.java:
if (planeToggle) {
    mPlaneRenderer.drawPlanes(mSession.getAllPlanes(), frame.getPose(), projmtx);
}
This was really simple. I made a bool named planeToggle and just placed the mPlaneRenderer.drawPlanes call inside an if condition. When the bool is true it displays the planes, and when it's false, it does not...
However, with Unity I am confused. I did something like this in HelloARController.cs:
I made a button to toggle the planes.
I set an event listener on it to toggle a boolean variable and did something like this:
for (int i = 0; i < m_newPlanes.Count; i++)
{
    // Instantiate a plane visualization prefab and set it to track the new plane. The transform is set to
    // the origin with an identity rotation since the mesh for our prefab is updated in Unity World
    // coordinates.
    GameObject planeObject = Instantiate(m_trackedPlanePrefab, Vector3.zero, Quaternion.identity, transform);
    planeObject.GetComponent<TrackedPlaneVisualizer>().SetTrackedPlane(m_newPlanes[i]);
    m_planeColors[0].a = 0;

    // Apply a random color and grid rotation.
    planeObject.GetComponent<Renderer>().material.SetColor("_GridColor", m_planeColors[0]);
    planeObject.GetComponent<Renderer>().material.SetFloat("_UvRotation", Random.Range(0.0f, 360.0f));

    if (togglePlanes == false) // my code
    {
        planeObject.SetActive(false); // my code
    }
}
Nothing happens when I press the toggle button.
The other option I had was to make changes in TrackedPlaneVisualizer.cs, where I did something like this:
for (int i = 0; i < planePolygonCount; ++i)
{
    Vector3 v = m_meshVertices[i];

    // Vector from plane center to current point
    Vector3 d = v - planeCenter;

    float scale = 1.0f - Mathf.Min((FEATHER_LENGTH / d.magnitude), FEATHER_SCALE);
    m_meshVertices.Add(scale * d + planeCenter);

    if (togglePlanesbool == true) // my code
    {
        m_meshColors.Add(new Color(0.0f, 0.0f, 0.0f, 1.0f)); // my code
    }
    else
    {
        m_meshColors.Add(new Color(0.0f, 0.0f, 0.0f, 0.0f)); // my code
    }
}
This did work, but I am experiencing delays in toggling, and sometimes if two different planes have been rendered they start toggling between themselves (if one is enabled, the other gets disabled). So I guess this is also not the option to go for... Can anyone help?
Note that I am a beginner in Unity.
The sample isn't really designed to hide and show the planes, so you have to add a couple things.
First, there is no collection of the GameObjects that represent the ARCore planes. The easiest way to do this is to add a tag to the game objects:
In the Unity editor, find the TrackedPlaneVisualizer prefab and select it. Then in the property inspector, drop down the Tag dropdown and add a tag named plane.
Next, in the Toggle handler method, you need to find all the game objects with the "plane" tag. Then get both the Renderer and TrackedPlaneVisualizer components and enable or disable them based on the toggle. You need to do both components; the Renderer draws the plane, and the TrackedPlaneVisualizer re-enables the Renderer (ideally it would honor the Renderer's state).
public void OnTogglePlanes(bool flag)
{
    showPlanes = flag;

    foreach (GameObject plane in GameObject.FindGameObjectsWithTag("plane"))
    {
        Renderer r = plane.GetComponent<Renderer>();
        TrackedPlaneVisualizer t = plane.GetComponent<TrackedPlaneVisualizer>();
        r.enabled = flag;
        t.enabled = flag;
    }
}
You can also do a similar thing where the GameObject is instantiated, so new planes honor the toggle.
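A minimal sketch of that, assuming the same showPlanes flag and applied right after the Instantiate call shown in the question:
GameObject planeObject = Instantiate(m_trackedPlanePrefab, Vector3.zero, Quaternion.identity, transform);
planeObject.GetComponent<TrackedPlaneVisualizer>().SetTrackedPlane(m_newPlanes[i]);

// newly spawned planes respect the current toggle state
planeObject.GetComponent<Renderer>().enabled = showPlanes;
planeObject.GetComponent<TrackedPlaneVisualizer>().enabled = showPlanes;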

Screen fade not working in Daydream VR since I migrated to Unity 5.6

I'm building a DayDream VR game. I've previously had a script to fade out the screen when the user is clicking somewhere to change levels/scenes.
Since I've migrated to Unity 5.6 / Google VR SDK 1.2, all fading effects have stopped working. But they still work in Preview mode on my desktop. This is because they changed the way the Camera works. I've tried different scripts online but none of them work; would anyone have an idea of how to do a screen fade on scene change, please?
Here is the current main part of the code:
// Derived from OVRScreenFade
float elapsedTime = 0.0f;
Color color = fadeColor;
color.a = 0.0f;
fadeMaterial.color = color;

while (elapsedTime < fadeTime)
{
    yield return new WaitForEndOfFrame();
    elapsedTime += Time.deltaTime;
    color.a = Mathf.Clamp01(elapsedTime / fadeTime);
    fadeMaterial.color = color;
}
I've also attempted to use the Autofade script. As I mentioned, they all work when I try the game on my desktop; they just don't work on the Android phone :(.
Any idea why please?
EDIT: Here is some extra code
public Material fadeMaterial = null; //starts NULL

//applied to cameras inside a function
foreach (Camera c in Camera.allCameras)
{
    var fadeControl = c.gameObject.AddComponent<ScreenFadeControl>();
    fadeControl.fadeMaterial = fadeMaterial;
    fadeControls.Add(fadeControl);
}
FINAL SOLUTION
Using the answer given here, I've created a script file with instructions, feel free to download it and use it if you need the same thing:
https://gist.github.com/xtrimsky/0d58ee4db1964577893353365903b91a
If you want to fade the entire screen I suggest:
In Unity add a Panel (make sure that it covers the whole screen).
Make a new Material and attach it to the Panel.
The Material should have Rendering Mode set to Transparent.
Attach the following Script to the Panel.
public Material m;
public float _colorSpeed = 0.01f;

private Color c;
private bool start = false;

void Update()
{
    if (start)
    {
        if (c.a < 1.0f)
            c.a = c.a + _colorSpeed;

        m.color = c;
    }
}

public void Fade()
{
    c = m.color;
    c.a = 0.0f;
    start = true;
}
Call the Fade() method when necessary.
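For example (a sketch; PanelFader is an assumed name for the script above, referenced from wherever you trigger the scene change):
public PanelFader fader; // assumed name for the fade script shown above

public void OnChangeLevelClicked()
{
    fader.Fade(); // the panel's alpha then rises each Update() until it is fully opaque
}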
Hope that solves your problem!

How to access the main Camera in Unity to make a zoom script?

I am trying to make a simple zoom script for when you click on a cube. I want it to zoom in on the cube, but I cannot find a way to make the main camera zoom for me. I have tried several different ways; here is the current one. I had it in an OnMouseDown, but it still would not work, so I moved it to Update to see if I could get it to work.
void Update ()
{
    if (Input.GetKeyDown("z"))
    {
        Debug.Log("Pressed Z");
        zoomedIn = !zoomedIn;
    }

    if (zoomedIn == true)
    {
        Debug.Log("True!");
        Camera.main.GetComponent<Camera>().fieldOfView = Mathf.Lerp(GetComponent<Camera>().fieldOfView, zoom, Time.deltaTime * smooth);
    }
    else
    {
        Camera.main.GetComponent<Camera>().fieldOfView = Mathf.Lerp(GetComponent<Camera>().fieldOfView, normal, Time.deltaTime * smooth);
    }
}
Looks like zoom and normal are not assigned to the correct values. Also make sure that you're in Perspective, not Orthographic view.
If you want to use Orthographic view just change all usages of fieldOfView to orthographicSize and change zoom to something reasonable, like 5 units.
normal should be the initial fieldOfView of the camera, retrieved in Start:
// camera is a private field
private Camera camera;

void Start()
{
    camera = GetComponent<Camera>();
    normal = camera.fieldOfView;
}
zoom should be a value less than normal (initial fieldOfView) assigned from the inspector to be able to "zoom in".
Your conditional branch will change to
if (zoomedIn) // Same as if (zoomedIn == true)
{
    camera.fieldOfView = Mathf.Lerp(camera.fieldOfView, zoom, Time.deltaTime * smooth);
}
else
{
    camera.fieldOfView = Mathf.Lerp(camera.fieldOfView, normal, Time.deltaTime * smooth);
}
Or, a more concise version:
camera.fieldOfView = Mathf.Lerp(camera.fieldOfView, zoomedIn ? zoom : normal, Time.deltaTime * smooth);
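As mentioned above, the orthographic variant would lerp orthographicSize instead (zoomOrtho/normalOrtho are names I'm assuming here for sizes you pick, e.g. 5 and the initial size):
camera.orthographicSize = Mathf.Lerp(camera.orthographicSize, zoomedIn ? zoomOrtho : normalOrtho, Time.deltaTime * smooth);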
I also suggest using a Coroutine to do this instead of doing this in Update.
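A rough coroutine version could look like this (a sketch using the same camera, zoom, normal and smooth fields; it needs using System.Collections;):
private IEnumerator ZoomTo(float targetFov)
{
    // keep lerping until we are (almost) at the target field of view
    while (Mathf.Abs(camera.fieldOfView - targetFov) > 0.01f)
    {
        camera.fieldOfView = Mathf.Lerp(camera.fieldOfView, targetFov, Time.deltaTime * smooth);
        yield return null;
    }

    camera.fieldOfView = targetFov;
}

// e.g. when Z is pressed:
// StartCoroutine(ZoomTo(zoomedIn ? zoom : normal));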

Fade in/out between scenes is not working in Unity - Google Cardboard plugin

I'm developing an application in Unity with the Google Cardboard plugin, and I tried to fade the screen in/out when passing between scenes. I've worked with this example, drawing a texture in the GUI object:
GUI.color = new Color (GUI.color.r, GUI.color.g, GUI.color.b, alpha);

Texture2D myTex;
myTex = new Texture2D (1, 1);
myTex.SetPixel (0, 0, fadeColor);
myTex.Apply ();
GUI.DrawTexture (new Rect (0, 0, Screen.width, Screen.height), myTex);

if (isFadeIn)
    alpha = Mathf.Lerp (alpha, -0.1f, fadeDamp * Time.deltaTime);
else
    alpha = Mathf.Lerp (alpha, 1.1f, fadeDamp * Time.deltaTime);

if (alpha >= 1 && !isFadeIn) {
    Application.LoadLevel (fadeScene);
    DontDestroyOnLoad(gameObject);
} else if (alpha <= 0 && isFadeIn) {
    Destroy(gameObject);
}
The code I worked with is from this page: Video Tutorial, Example downloads. It worked fine in a Unity game without the Cardboard plugin, but in my current project the same code is not working. The only difference is the use of the Cardboard plugin.
Is there any specific Cardboard object I must use instead of GUI or another way to draw a texture?
As per the Google Cardboard docs, you need to have GUI elements exist in 3D space in front of the camera so they are replicated in each eye.
I'll share my solution of how I did it. Note that what I've done is have a single instance of the Cardboard Player Prefab spawn when my game starts and persist throughout all my levels via DontDestroyOnLoad(), rather than having a separate instance in each level.
This allows for settings to be carried over to each loaded level and Fade out and Fade in the screen.
I accomplished a screen fader by creating a World Space Canvas that is parented to the Cardboard prefab's "Head" object so it follows the gaze, and putting a black Sprite image on it that covers the entire Canvas, which blocks the player's view whenever the black sprite is visible.
This script, attached to my Player Prefab, allows me to first fade out the screen (call FadeOut()), load a new level (set LevelToLoad to the level index you want to load), then fade the screen back in after the new level is loaded.
By default it uses the async way of loading levels to allow for loading bars, but you can set UseAsync to false to load levels via Application.LoadLevel().
using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class LoadOperations : MonoBehaviour
{
    public Image myImage;
    public bool UseAsync;
    private AsyncOperation async = null;
    public int LevelToLoad;
    public float FadeoutTime;
    public float fadeSpeed = 1.5f;
    private bool fadeout;
    private bool fadein;

    public void FadeOut()
    {
        fadein = false;
        fadeout = true;
        Debug.Log("Fading Out");
    }

    public void FadeIn()
    {
        fadeout = false;
        fadein = true;
        Debug.Log("Fading In");
    }

    void Update()
    {
        if (async != null)
        {
            Debug.Log(async.progress);

            //When the Async is finished, the level is done loading, fade in the screen
            if (async.progress >= 1.0)
            {
                async = null;
                FadeIn();
            }
        }

        //Fade Out the screen to black
        if (fadeout)
        {
            myImage.color = Color.Lerp(myImage.color, Color.black, fadeSpeed * Time.deltaTime);

            //Once the Black image is visible enough, Start loading the next level
            if (myImage.color.a >= 0.999)
            {
                StartCoroutine("LoadALevel");
                fadeout = false;
            }
        }

        if (fadein)
        {
            myImage.color = Color.Lerp(myImage.color, new Color(0, 0, 0, 0), fadeSpeed * Time.deltaTime);

            if (myImage.color.a <= 0.01)
            {
                fadein = false;
            }
        }
    }

    public void LoadLevel(int index)
    {
        if (UseAsync)
        {
            LevelToLoad = index;
        }
        else
        {
            Application.LoadLevel(index);
        }
    }

    public IEnumerator LoadALevel()
    {
        async = Application.LoadLevelAsync(LevelToLoad);
        yield return async;
    }
}
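Usage would then be something along these lines (a sketch; it assumes the component sits on the persistent player prefab and that you already have a reference to it, here called player):
var loader = player.GetComponent<LoadOperations>();
loader.LoadLevel(2);   // with UseAsync enabled this only stores the level index
loader.FadeOut();      // once the image is fully black, LoadALevel() starts the async load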
The GUI, GUILayout and Graphics APIs do not work in VR; no 2D direct-to-screen drawing will work properly.
You should render in 3D. The easiest thing to do is to put a sphere around the camera (or even better, a sphere around each eye) and animate its opacity.
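A very small sketch of that idea (assuming a sphere mesh with flipped normals and a transparent material, parented to the camera; names like SphereFade and sphereRenderer are just placeholders):
using System.Collections;
using UnityEngine;

// Sketch only: fades a sphere that surrounds the camera by animating its material's alpha.
public class SphereFade : MonoBehaviour
{
    public Renderer sphereRenderer; // the inward-facing sphere around the camera
    public float fadeTime = 2f;

    public IEnumerator FadeOut()
    {
        Color c = sphereRenderer.material.color;

        for (float t = 0f; t < fadeTime; t += Time.deltaTime)
        {
            c.a = Mathf.Clamp01(t / fadeTime);
            sphereRenderer.material.color = c;
            yield return null;
        }
    }
}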
