I am making a top-down, tile-based 2D game. In this game, I want to make one of the walls into a mirror, like you can see in this video. I know the game in the trailer is made in RPG Maker, but I want to make my game in Unity.
I have tried setting a camera right next to the mirror, adding a RenderTexture to that camera and putting the texture on the sprite, but it seemed it was not possible to convert a RenderTexture to a Sprite, so this did not end up working.
So my question is, is it possible to create a mirror like in the trailer?
It is possible to get that effect. Just parent the second camera to your character and make it move with your character.
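For example, a minimal sketch of keeping a mirror camera locked to the character (all names and the offset are placeholder assumptions; parenting in the Editor works just as well):

using UnityEngine;

// Hypothetical follow script for the mirror camera: keeps the camera at a
// fixed offset from the character each frame (an alternative to parenting).
public class MirrorCamFollow : MonoBehaviour
{
    public Transform character;                        // assign the player in the Inspector
    public Vector3 offset = new Vector3(0f, 0f, -10f); // typical 2D camera offset

    void LateUpdate()
    {
        // Copy the character's position; or call transform.SetParent(character, true)
        // once in Start() and delete this method entirely.
        transform.position = character.position + offset;
    }
}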
It is possible to convert a RenderTexture to a Sprite. First convert the RenderTexture to a Texture2D, then convert the Texture2D to a Sprite with the Sprite.Create function.
It is better to disable the second (mirror) camera and call mirrorCam.Render() to render it manually only when you need to. The script below should get you started. Attach it to an empty GameObject, assign the mirror camera and the target SpriteRenderer in the Editor, and it should mirror what the camera is seeing onto the SpriteRenderer. Don't forget to assign a RenderTexture to the mirror camera's Target Texture.
using System.Collections;
using UnityEngine;

public class CameraToSpriteMirror : MonoBehaviour
{
public SpriteRenderer spriteToUpdate;
public Camera mirrorCam;
void Start()
{
StartCoroutine(waitForCam());
}
WaitForEndOfFrame endOfFrame = new WaitForEndOfFrame();
IEnumerator waitForCam()
{
//Will run forever in this while loop
while (true)
{
//Wait for end of frame
yield return endOfFrame;
//Get camera render texture
RenderTexture rendText = RenderTexture.active;
RenderTexture.active = mirrorCam.targetTexture;
//Render that camera
mirrorCam.Render();
//Convert to Texture2D
Texture2D text = renderTextureToTexture2D(mirrorCam.targetTexture);
RenderTexture.active = rendText;
//Convert to Sprite
Sprite sprite = texture2DToSprite(text);
//Apply to SpriteRenderer
spriteToUpdate.sprite = sprite;
}
}
Texture2D renderTextureToTexture2D(RenderTexture rTex)
{
Texture2D tex = new Texture2D(rTex.width, rTex.height, TextureFormat.RGB24, false);
tex.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
tex.Apply();
return tex;
}
Sprite texture2DToSprite(Texture2D text2D)
{
Sprite sprite = Sprite.Create(text2D, new Rect(0, 0, text2D.width, text2D.height), Vector2.zero);
return sprite;
}
}
You could do it the good old Super Mario 64 way and have the wall be a screen that shows another camera's perspective of another character.
Unity is pretty good at PIP (picture-in-picture) from what I've heard, so it may be worth a shot.
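If you go the picture-in-picture route, a minimal sketch looks like this (the viewport rectangle and names are arbitrary assumptions, not anything required by Unity):

using UnityEngine;

// Renders a second camera into a small rectangle in the corner of the screen.
public class PictureInPicture : MonoBehaviour
{
    public Camera pipCamera;   // assign the secondary camera in the Inspector

    void Start()
    {
        // Viewport rect is in normalized screen coordinates (0..1);
        // this draws the second camera's view in the top-right corner.
        pipCamera.rect = new Rect(0.7f, 0.7f, 0.3f, 0.3f);
        // Draw it on top of the main camera.
        pipCamera.depth = Camera.main.depth + 1;
    }
}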
Is there a way in Unity to set a Script as an AudioSource?
I have a script which produces sound, and I now want to visualize it, but with GetComponent I only get a null value.
I have tried GetComponent, but for that I would need to set the script as the source. It already worked with AudioListeners, but I need the AudioSource as well to visualize it. I have followed several online tutorials, but none of them uses a script as the source.
I can only see one solution left, which I want to avoid: writing the data from the AudioSource to a file and generating the visualization from that.
Edit: I have tried what DareerAhmadMufti suggested:
Sadly it still doesn't work.
using UnityEngine;

public class AudioGaudioGenerator : MonoBehaviour
{
private Graph graph;
public AudioSource _audioSource;
float[] farr = new float[4096];
void Start()
{
graph = this.GetComponentInChildren<Graph>();
_audioSource = GetComponent<AudioSource>();
}
// Update is called once per frame
void Update()
{
GetSpectrumAudioSource();
if (graph.showWindow0)
{
graph.SetValues(farr);
}
}
void GetSpectrumAudioSource()
{
_audioSource.GetOutputData(farr, 0);
}
}
This is the script for the audio generator.
You want to use a script for audio visualization, but you only get a null value using the GetComponent method. You can create an empty object in the scene at position (0, 0, 0), rename it Audio Visualization, and add LineRenderer and AudioSource components to it. Create a red material and assign it to the LineRenderer.
Create a cube, create a green material and assign it to the cube, then drag the cube into the Project window to make a prefab.
Then write the script:
using UnityEngine;

public class AudioVisualization : MonoBehaviour
{
AudioSource audio;//Sound source
float[] samples = new float[128];//The length of the array to store the spectrum data
LineRenderer linerenderer;//Draw line
public GameObject cube;//cube prefab
Transform[] cubeTransform;//Position of cube prefab
Vector3 cubePos;//The middle position, used to compare the cube position with the spectral data of this frame
// Use this for initialization
void Start()
{
GameObject tempCube;
audio = GetComponent<AudioSource>();//Get the sound source component
linerenderer = GetComponent<LineRenderer>();//Get the line drawing component
linerenderer.positionCount = samples.Length;//Set the number of segments of the line segment
cubeTransform = new Transform[samples.Length];//Set the length of the array
//Move the gameobject mounted by the script to the left, so that the center of the generated object is facing the camera
transform.position = new Vector3(-samples.Length * 0.5f, transform.position.y, transform.position.z);
//Generate the cube, pass its position information into the cubeTransform array, and set it as the child object of the gameobject mounted by the script
for (int i = 0; i < samples.Length; i++)
{
tempCube = Instantiate(cube, new Vector3(transform.position.x + i, transform.position.y, transform.position.z), Quaternion.identity);
cubeTransform[i] = tempCube.transform;
cubeTransform[i].parent = transform;
}
}
// Update is called once per frame
void Update()
{
//get spectrum
audio.GetSpectrumData(samples, 0, FFTWindow.BlackmanHarris);
//cycle
for (int i = 0; i < samples.Length; i++)
{
//Set the y value of the middle position according to the spectrum data, and set the x and z values according to the position of the corresponding cubeTransform
//Use Mathf.Clamp to limit the y of the middle position to a certain range to avoid being too large
//The more backward the spectrum is, the smaller
cubePos.Set(cubeTransform[i].position.x, Mathf.Clamp(samples[i] * ( 50+i * i*0.5f), 0, 100), cubeTransform[i].position.z);
//Draw a line, in order to prevent the line from overlapping with the cube, the height is reduced by one
linerenderer.SetPosition(i, cubePos-Vector3.up);
//When the y value of the cube is less than the y value of the intermediate position cubePos, the position of the cube becomes the position of the cubePos
if (cubeTransform[i].position.y < cubePos.y)
{
cubeTransform[i].position = cubePos;
}
//When the y value of the cube is greater than the y value of the cubePos in the middle position, the position of the cube slowly falls down
else if (cubeTransform[i].position.y > cubePos.y)
{
cubeTransform[i].position -= new Vector3(0, 0.5f, 0);
}
}
}
}
Attach the script to the Audio Visualization object and assign the cube prefab to it. Import an audio resource and assign it to the AudioSource component.
The playback effect is as follows
You cannot use a script as an AudioSource. There is already an AudioSource component. If an object contains an AudioSource component, then you are able to access it with GetComponent<AudioSource>(), but it must already be on the object.
Your question is quite ambiguous. You can't use your custom script as an AudioSource, as AudioSource is a sealed class.
If you want to access the data of the AudioClip or AudioSource, add an AudioSource component to the same object your script is attached to, like:
Make a public AudioSource variable in your script and assign the AudioSource component to it, as shown in the screenshot above.
This way you will be able to access the data you want in your custom scripts via that variable.
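For illustration, a minimal sketch of that pattern (the field name and buffer size are placeholder choices):

using UnityEngine;

// Expose an AudioSource field, assign it in the Inspector (or fetch it from
// the same GameObject), then read audio data from it in your own script.
public class AudioDataReader : MonoBehaviour
{
    public AudioSource source;           // assign in the Inspector
    float[] samples = new float[1024];   // buffer size must be a power of two

    void Start()
    {
        if (source == null)
            source = GetComponent<AudioSource>(); // only works if the component is on this object
    }

    void Update()
    {
        // Raw waveform data of whatever the AudioSource is playing
        source.GetOutputData(samples, 0);
        // Or frequency-domain data:
        // source.GetSpectrumData(samples, 0, FFTWindow.BlackmanHarris);
    }
}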
I'm having a problem with sprites in my Unity3D project. Basically, I can't change the sprite in a SpriteRenderer component at runtime. During my research I've only seen solutions that require the sprite to be pre-loaded, but I can't do that because it's generated from the user's input image.
What happens is that the user can change the background of the "game" by uploading his own photo from his computer. I get the photo in and generate a sprite from it, but when I replace the base sprite with his sprite, nothing happens. The base sprite is still showing. If I put the background sprite on a panel's Image in the canvas, then everything works great, but if I do the same with the SpriteRenderer then nothing happens. Here is my code:
using System.Collections;
using System.Runtime.InteropServices;
using UnityEngine;

public class UploadImage : MonoBehaviour
{
public GameObject background;
public Sprite sp;
[DllImport("__Internal")]
private static extern void ImageUploaderCaptureClick();
public void setTexture(Texture2D texture)
{
sp = Sprite.Create(texture, new Rect(0, 0, texture.width, texture.height), new Vector2(texture.width / 2, texture.height / 2));
background.GetComponent<SpriteRenderer>().sprite = sp;
}
IEnumerator LoadTexture(string url)
{
WWW image = new WWW(url);
yield return image;
Texture2D texture = new Texture2D(1, 1);
image.LoadImageIntoTexture(texture);
Debug.Log("Loaded image size: " + texture.width + "x" + texture.height);
setTexture(texture);
}
void FileSelected(string url)
{
StartCoroutine(LoadTexture(url));
}
public void OnButtonPointerDown()
{
#if UNITY_EDITOR
string path = UnityEditor.EditorUtility.OpenFilePanel("Open image", "", "jpg,png,bmp");
if (!System.String.IsNullOrEmpty(path))
FileSelected("file:///" + path);
#else
ImageUploaderCaptureClick ();
#endif
}
}
I can't have the background on an Image in the canvas because other game objects lose transparency, and if I set the alpha on the image too low, then when game objects move it leaves everything blurry.
Thanks for your help
I think you are setting the sprite's pivot wrong when generating the sprite. Your sprite should actually be displayed right now, but it is far away from where you expect it to be: the pivot argument of Sprite.Create is given in normalized rect coordinates (so (0.5, 0.5) is the center), not in pixels.
Change your code to something like this:
sp = Sprite.Create(texture, new Rect(0, 0, texture.width, texture.height), new Vector2(0.5f, 0.5f));
I want to play a stereo 360-degree video in virtual reality in Unity on Android. So far I have been doing some research, and I have two cameras for the right and left eye, each with a sphere around it. I also need a custom shader to make the image render on the inside of the sphere. I have the upper half of the image showing on one sphere by setting the y-tiling to 0.5, and the lower half shows on the other sphere with y-tiling 0.5 and y-offset 0.5. With this I can already show a 3D 360-degree image correctly. The whole idea is from this tutorial.
Now, for video, I need control over the playback speed, so it turned out I need the VideoPlayer from the new Unity 5.6 beta. My setup so far would require the VideoPlayer to play the same video on both spheres, with one sphere showing the upper part (one eye) and the other showing the lower part (other eye).
Here is my problem: I don't know how to get the VideoPlayer to play the same video on two different materials (since they have different tiling values). Is there a way to do that?
I got a hint that I could use the same material and achieve the tiling effect via UV, but I don't know how that works, and I haven't even gotten the VideoPlayer to play the video on two objects using the same material on both of them. I have a screenshot of that here. The right sphere just has the material videoMaterial. No tiling, since I'd have to do that via UV.
Which way should I go, and how do I do it? Am I on the right track here?
Am I on the right track here?
Almost, but you are currently using a Renderer and Material instead of a RenderTexture and Material.
Which way should I go, and how do I do it?
You need to use a RenderTexture for this. Basically, you render the video to a RenderTexture, then assign that texture to the material of both spheres.
1. Create a RenderTexture and assign it to the VideoPlayer.
2. Create two materials for the spheres.
3. Set VideoPlayer.renderMode to VideoRenderMode.RenderTexture.
4. Set the texture of both spheres to the texture from the RenderTexture.
5. Prepare and play the video.
The code below does exactly that. It should work out of the box. The only thing you need to do is modify the tiling and offset of each material to your needs (see the short tiling/offset sketch after the script).
You should also comment out:
leftSphere = createSphere("LeftEye", new Vector3(-5f, 0f, 0f), new Vector3(4f, 4f, 4f));
rightSphere = createSphere("RightEye", new Vector3(5f, 0f, 0f), new Vector3(4f, 4f, 4f));
then use a sphere imported from any 3D application. Those lines of code are only there for testing purposes, and it's not a good idea to play video on Unity's built-in sphere because it doesn't have enough detail to make the video look smooth.
using UnityEngine;
using UnityEngine.Video;
public class StereoscopicVideoPlayer : MonoBehaviour
{
RenderTexture renderTexture;
Material leftSphereMat;
Material rightSphereMat;
public GameObject leftSphere;
public GameObject rightSphere;
private VideoPlayer videoPlayer;
//Audio
private AudioSource audioSource;
void Start()
{
//Create Render Texture
renderTexture = createRenderTexture();
//Create Left and Right Sphere Materials
leftSphereMat = createMaterial();
rightSphereMat = createMaterial();
//Create the Left and Right Sphere Spheres
leftSphere = createSphere("LeftEye", new Vector3(-5f, 0f, 0f), new Vector3(4f, 4f, 4f));
rightSphere = createSphere("RightEye", new Vector3(5f, 0f, 0f), new Vector3(4f, 4f, 4f));
//Assign material to the Spheres
leftSphere.GetComponent<MeshRenderer>().material = leftSphereMat;
rightSphere.GetComponent<MeshRenderer>().material = rightSphereMat;
//Add VideoPlayer to the GameObject
videoPlayer = gameObject.AddComponent<VideoPlayer>();
//Add AudioSource
audioSource = gameObject.AddComponent<AudioSource>();
//Disable Play on Awake for both Video and Audio
videoPlayer.playOnAwake = false;
audioSource.playOnAwake = false;
// We want to play from url
videoPlayer.source = VideoSource.Url;
videoPlayer.url = "http://www.quirksmode.org/html5/videos/big_buck_bunny.mp4";
//Set Audio Output to AudioSource
videoPlayer.audioOutputMode = VideoAudioOutputMode.AudioSource;
//Assign the Audio from Video to AudioSource to be played
videoPlayer.EnableAudioTrack(0, true);
videoPlayer.SetTargetAudioSource(0, audioSource);
//Set the mode of output to be RenderTexture
videoPlayer.renderMode = VideoRenderMode.RenderTexture;
//Set the RenderTexture to store the images to
videoPlayer.targetTexture = renderTexture;
//Set the Texture of both Spheres to the Texture from the RenderTexture
assignTextureToSphere();
//Prepare Video to prevent Buffering
videoPlayer.Prepare();
//Subscribe to prepareCompleted event
videoPlayer.prepareCompleted += OnVideoPrepared;
}
RenderTexture createRenderTexture()
{
RenderTexture rd = new RenderTexture(1024, 1024, 16, RenderTextureFormat.ARGB32);
rd.Create();
return rd;
}
Material createMaterial()
{
return new Material(Shader.Find("Specular"));
}
void assignTextureToSphere()
{
//Set the Texture of both Spheres to the Texture from the RenderTexture
leftSphereMat.mainTexture = renderTexture;
rightSphereMat.mainTexture = renderTexture;
}
GameObject createSphere(string name, Vector3 spherePos, Vector3 sphereScale)
{
GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
sphere.transform.position = spherePos;
sphere.transform.localScale = sphereScale;
sphere.name = name;
return sphere;
}
void OnVideoPrepared(VideoPlayer source)
{
Debug.Log("Done Preparing Video");
//Play Video
videoPlayer.Play();
//Play Sound
audioSource.Play();
//Change Play Speed
if (videoPlayer.canSetPlaybackSpeed)
{
videoPlayer.playbackSpeed = 1f;
}
}
}
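As a rough illustration of the tiling/offset step mentioned above, here is a hypothetical setup for a top/bottom (over-under) stacked stereo video; which half belongs to which eye depends on your source footage, so treat the values as assumptions:

using UnityEngine;

// Hypothetical helper: give each eye's material half of the shared RenderTexture.
public class StereoTilingSetup : MonoBehaviour
{
    public Material leftSphereMat;
    public Material rightSphereMat;

    void Start()
    {
        // UV origin is bottom-left, so an offset of 0.5 selects the top half.
        leftSphereMat.mainTextureScale = new Vector2(1f, 0.5f);
        leftSphereMat.mainTextureOffset = new Vector2(0f, 0.5f);   // top half

        rightSphereMat.mainTextureScale = new Vector2(1f, 0.5f);
        rightSphereMat.mainTextureOffset = new Vector2(0f, 0f);    // bottom half
    }
}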
There is also a Unity tutorial on how to do this with a special shader, but it does not work for me and some other people. I suggest you use the method above until VR support is added to the VideoPlayer API.
I have two renderer objects (A and B) in my scene connected to two different cameras (green square and red square):
I am using the following script on both render objects to create a render texture on the corresponding camera and then draw it as a texture on the object each frame:
using UnityEngine;
using System.Collections;
[ExecuteInEditMode]
public class CameraRenderer : MonoBehaviour
{
public Camera Camera;
public Renderer Renderer;
void Start()
{
RenderTexture renderTexture = new RenderTexture (256, 256, 16, RenderTextureFormat.ARGB32);
renderTexture.Create ();
Camera.targetTexture = renderTexture;
}
void Update ()
{
Renderer.sharedMaterial.mainTexture = GetCameraTexture ();
}
Texture2D GetCameraTexture()
{
RenderTexture currentRenderTexture = RenderTexture.active;
RenderTexture.active = Camera.targetTexture;
Camera.Render();
Texture2D texture = new Texture2D(Camera.targetTexture.width, Camera.targetTexture.height);
texture.ReadPixels(new Rect(0, 0, Camera.targetTexture.width, Camera.targetTexture.height), 0, 0);
texture.Apply();
RenderTexture.active = currentRenderTexture;
return texture;
}
}
I am expecting to see two different images on A and B from the different cameras, but I am seeing the same image. I originally was using a render texture that I created in the editor and attached to the camera, but thought that might be what was causing them to render the same thing, so I tried creating a new texture on each object. Sadly, this still resulted in the same outcome.
I'm pretty new to Unity, so I've run out of ideas pretty fast. Any suggestions would be great!
I wouldn't advise giving your fields the same names as their types. Anyway, I think the renderers are using the same material, so they both render the same texture, whichever camera writes to it last.
Either use Renderer.material to automatically create a new instance of the material, or manually assign different materials to the two renderers.
Try,
Renderer.material.mainTexture = GetCameraTexture ();
Instead of,
Renderer.sharedMaterial.mainTexture = GetCameraTexture ();
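If you prefer the second option, here is a sketch of assigning each renderer its own material instance up front (the field names and the copy-from-a-base-material approach are assumptions):

using UnityEngine;

// Hypothetical setup: give each renderer an independent material copy
// so the two camera textures don't overwrite each other.
public class AssignUniqueMaterials : MonoBehaviour
{
    public Renderer rendererA;
    public Renderer rendererB;
    public Material baseMaterial;   // the material both objects started from

    void Start()
    {
        rendererA.material = new Material(baseMaterial);
        rendererB.material = new Material(baseMaterial);
    }
}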
I'm developing an application in Unity with the Google Cardboard plugin, and I am trying to fade the screen in/out when passing between scenes. I've worked with this example, which draws a texture using the GUI class:
GUI.color = new Color (GUI.color.r, GUI.color.g, GUI.color.b, alpha);
Texture2D myTex;
myTex = new Texture2D (1, 1);
myTex.SetPixel (0, 0, fadeColor);
myTex.Apply ();
GUI.DrawTexture (new Rect (0, 0, Screen.width, Screen.height), myTex);
if (isFadeIn)
alpha = Mathf.Lerp (alpha, -0.1f, fadeDamp * Time.deltaTime);
else
alpha = Mathf.Lerp (alpha, 1.1f, fadeDamp * Time.deltaTime);
if (alpha >= 1 && !isFadeIn) {
Application.LoadLevel (fadeScene);
DontDestroyOnLoad(gameObject);
} else if (alpha <= 0 && isFadeIn) {
Destroy(gameObject);
}
The code I worked with is from this page: Video Tutorial, Example downloads. It worked fine in a Unity game without the Cardboard plugin, but in my current project the same approach is not working. The only difference is the use of the Cardboard plugin.
Is there any specific Cardboard object I must use instead of GUI or another way to draw a texture?
As per the Google Cardboard docs, you need to have GUI elements exist in 3D space in front of the camera so they are replicated in each eye.
I'll share my solution of how I did it. Note that what I've done is have a single instance of the Cardboard player prefab spawn when my game starts and persist throughout all my levels via DontDestroyOnLoad(), rather than having a separate instance in each level.
This allows settings to be carried over to each loaded level and lets me fade the screen out and in.
I accomplished the screen fader by creating a World Space Canvas that is parented to the Cardboard prefab's "Head" object so it follows the player's gaze, and putting a black sprite Image covering the entire canvas, which blocks the player's view whenever it is visible.
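For reference, a rough sketch of building that fader canvas in code (the head transform, sizes and distances are assumptions; doing the same in the Editor works just as well):

using UnityEngine;
using UnityEngine.UI;

// Hypothetical setup: builds a world-space canvas with a black Image in front
// of the given head transform, to be used as a screen fader.
public class ScreenFaderSetup : MonoBehaviour
{
    public Transform head;   // assign the Cardboard "Head" transform in the Inspector

    public Image CreateFaderImage()
    {
        // World Space canvas parented to the head so it follows gaze
        GameObject canvasGO = new GameObject("FadeCanvas", typeof(Canvas));
        Canvas canvas = canvasGO.GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;
        canvasGO.transform.SetParent(head, false);
        canvasGO.transform.localPosition = new Vector3(0f, 0f, 1f); // 1 unit in front of the eyes

        RectTransform canvasRT = canvasGO.GetComponent<RectTransform>();
        canvasRT.sizeDelta = new Vector2(2f, 2f);   // 2 x 2 world units, enough to cover the view up close

        // Black image stretched over the whole canvas
        GameObject imageGO = new GameObject("FadeImage", typeof(Image));
        imageGO.transform.SetParent(canvasGO.transform, false);
        Image image = imageGO.GetComponent<Image>();
        image.color = new Color(0f, 0f, 0f, 0f);    // start fully transparent

        RectTransform rt = image.rectTransform;
        rt.anchorMin = Vector2.zero;
        rt.anchorMax = Vector2.one;
        rt.offsetMin = Vector2.zero;
        rt.offsetMax = Vector2.zero;

        return image;
    }
}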
The script below, attached to my player prefab, allows me to first fade out the screen (call FadeOut()), load a new level (set LevelToLoad to the level index you want to load), then fade the screen back in after the new level is loaded.
By default it loads levels asynchronously, to allow for loading bars, but you can set UseAsync to false to load levels via Application.LoadLevel().
using UnityEngine;
using UnityEngine.UI;
using System.Collections;
public class LoadOperations: MonoBehaviour {
public Image myImage;
// Use this for initialization
public bool UseAsync;
private AsyncOperation async = null;
public int LevelToLoad;
public float FadeoutTime;
public float fadeSpeed = 1.5f;
private bool fadeout;
private bool fadein;
public void FadeOut(){
fadein= false;
fadeout = true;
Debug.Log("Fading Out");
}
public void FadeIn(){
fadeout = false;
fadein = true;
Debug.Log("Fading In");
}
void Update(){
if(async != null){
Debug.Log(async.progress);
//When the Async is finished, the level is done loading, fade in the screen
if(async.progress >= 1.0){
async = null;
FadeIn();
}
}
//Fade Out the screen to black
if(fadeout){
myImage.color = Color.Lerp(myImage.color, Color.black, fadeSpeed * Time.deltaTime);
//Once the Black image is visible enough, Start loading the next level
if(myImage.color.a >= 0.999){
StartCoroutine("LoadALevel");
fadeout = false;
}
}
if(fadein){
myImage.color = Color.Lerp(myImage.color, new Color(0,0,0,0), fadeSpeed * Time.deltaTime);
if(myImage.color.a <= 0.01){
fadein = false;
}
}
}
public void LoadLevel(int index){
if(UseAsync){
LevelToLoad= index;
}else{
Application.LoadLevel(index);
}
}
public IEnumerator LoadALevel() {
async = Application.LoadLevelAsync(LevelToLoad);
yield return async;
}
}
GUI, GUILayout and Graphics do not work in VR. Nothing drawn 2D directly to the screen will work properly.
You should render in 3D. The easiest thing to do is to put a sphere around the camera (or, even better, a sphere around each eye) and animate its opacity.
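As a rough sketch of that idea, assuming the fade sphere uses a material with a transparent shader whose color alpha can be animated (all names are placeholders):

using System.Collections;
using UnityEngine;

// Hypothetical fader: a sphere surrounding the camera whose material alpha
// is animated to fade the view to black and back.
public class SphereFader : MonoBehaviour
{
    public Renderer sphereRenderer;   // the inward-facing fade sphere
    public float fadeDuration = 1f;

    // Usage from another script: StartCoroutine(fader.Fade(0f, 1f)); // fade to black
    public IEnumerator Fade(float fromAlpha, float toAlpha)
    {
        Material mat = sphereRenderer.material;
        Color c = mat.color;
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            c.a = Mathf.Lerp(fromAlpha, toAlpha, t / fadeDuration);
            mat.color = c;
            yield return null;
        }
        c.a = toAlpha;
        mat.color = c;
    }
}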