Load and set Render Texture - C#

I have CamFeed.rendertexture and CamFeed.mat files. I'm loading them from a script and assigning them as the camera's target texture and as the material of another quad, respectively, but it just shows a black screen.
Material newMat = Resources.Load("CamFeed", typeof(Material)) as Material;
RenderTexture rendertexture = Resources.Load("CamFeed", typeof(Material)) as RenderTexture;
FirstPersonCamera.targetTexture = rendertexture;
quad.GetComponent<Renderer>().material = newMat;
How can I sort this out?

Well, firstly, why are you loading an asset as typeof(Material) and then casting it to RenderTexture? That cast will always fail and return null. You should have one asset of each type in your Resources folder, named distinctly, so that you can load them the following way:
Material newMat = Resources.Load<Material>("CamFeedMaterial");
RenderTexture rendertexture = Resources.Load<RenderTexture>("CamFeedTexture");
Also make sure that your material uses a shader such as Unlit/Texture, since you are going to need to attach the RenderTexture to it, either in the Editor or from code:
newMat.mainTexture = rendertexture;
Finally, your last bit of code stays the same:
FirstPersonCamera.targetTexture = rendertexture;
quad.GetComponent<Renderer>().material = newMat;
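Putting it all together, here is a minimal sketch of the whole setup (the asset names CamFeedMaterial and CamFeedTexture and the FirstPersonCamera and quad fields are assumptions; rename them to match your project):
using UnityEngine;

public class CamFeedSetup : MonoBehaviour
{
    //Assign these in the Inspector (names are assumptions)
    public Camera FirstPersonCamera;
    public GameObject quad;

    void Start()
    {
        //Load both assets from the Resources folder
        Material newMat = Resources.Load<Material>("CamFeedMaterial");
        RenderTexture rendertexture = Resources.Load<RenderTexture>("CamFeedTexture");

        //Route the camera's output into the render texture
        FirstPersonCamera.targetTexture = rendertexture;

        //Attach the render texture to the material and put it on the quad
        newMat.mainTexture = rendertexture;
        quad.GetComponent<Renderer>().material = newMat;
    }
}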

Related

Importing .png file runtime Unity Android

I've got a problem importing a .png file into my project in code.
This is my .png file before importing:
After this code:
var pngImage = LoadPNG(pngPath);
string pngPath2 = Application.persistentDataPath + "/images/testImage.png";
var meshRenderer = GameObject.Find("SimInput").GetComponent<MeshRenderer>();
meshRenderer.material.mainTexture = pngImage;
public static Texture2D LoadPNG(string filePath)
{
    Texture2D tex = null;
    byte[] fileData;
    if (File.Exists(filePath))
    {
        fileData = File.ReadAllBytes(filePath);
        tex = new Texture2D(2, 2);
        tex.LoadImage(fileData);
    }
    return tex;
}
I get the result as shown below:
Where am I making a mistake?
I have been trying to save this file again as a .png, but the result was the same as in the first picture. Is there any property to change in Unity?
Thank you in advance.
This is a transparency issue. You are using the Standard material, which has its "Rendering Mode" set to "Opaque". You have to set that to "Fade" or "Transparent"; in this case, "Fade" should work better. After this, you can control the Metallic and Smoothness sliders to make it darker or lighter. You can also use another shader such as Sprites ---> Default, UI ---> Default or Unlit ---> Transparent, and these should work without having to set anything else.
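If you need to switch the Standard shader to Fade mode from code rather than in the Editor, the sketch below shows the commonly used blend-state and keyword recipe (this helper is my own addition, not part of the original answer; it mirrors what the Rendering Mode dropdown does):
using UnityEngine;
using UnityEngine.Rendering;

public static class StandardShaderModes
{
    //Hypothetical helper: switch a Standard-shader material to Fade at runtime
    public static void SetFade(Material mat)
    {
        mat.SetFloat("_Mode", 2); //2 = Fade in the Standard shader
        mat.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
        mat.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
        mat.SetInt("_ZWrite", 0);
        mat.DisableKeyword("_ALPHATEST_ON");
        mat.EnableKeyword("_ALPHABLEND_ON");
        mat.DisableKeyword("_ALPHAPREMULTIPLY_ON");
        mat.renderQueue = (int)RenderQueue.Transparent;
    }
}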
While this will solve your issue, if all you need to do is display the loaded Texture, use the RawImage component. This is the appropriate way to display a Texture2D in the UI. To create it, go to GameObject ---> UI ---> RawImage, then use the simple code below to display the texture on the RawImage.
//Set a RawImage in the Inspector
public RawImage rawImage;

void Start()
{
    Texture2D pngImage = LoadPNG(pngPath);
    rawImage.texture = pngImage;
}
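Note that pngPath is not defined in the snippet above; following the question's own pngPath2, it could be built like this (an assumption based on the question's code):
string pngPath = Application.persistentDataPath + "/images/testImage.png";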

Unity - Can I make a mirror in 2D?

I am making a top-down tile-based 2D game. In this game, I want to make one of the walls into a mirror, as you can see in this video. Now, I know the game in the trailer is made in RPG Maker, but I want to make my game in Unity 3D.
I have tried to set a camera right next to the mirror, add a RenderTexture to the camera and set that texture on the Sprite, but of course it is not possible to convert a RenderTexture to a Sprite, so this did not end up working.
So my question is, is it possible to create a mirror like in the trailer?
It is possible to get that effect. Just parent the second camera to your character and make it move with your character.
It is possible to convert a RenderTexture to a Sprite: first convert the RenderTexture to a Texture2D, then convert the Texture2D to a Sprite with the Sprite.Create function.
It is better to disable the second (mirror) camera and then use mirrorCam.Render() to render it manually only when you need to. The script below should get you started: attach it to an empty GameObject, assign the mirror camera and the target SpriteRenderer in the Editor, and it should mirror what the camera is seeing to the SpriteRenderer. Don't forget to plug a RenderTexture into the mirror camera.
public class CameraToSpriteMirror : MonoBehaviour
{
    public SpriteRenderer spriteToUpdate;
    public Camera mirrorCam;

    WaitForEndOfFrame endOfFrame = new WaitForEndOfFrame();

    void Start()
    {
        StartCoroutine(waitForCam());
    }

    IEnumerator waitForCam()
    {
        //Will run forever in this while loop
        while (true)
        {
            //Wait for end of frame
            yield return endOfFrame;

            //Save the currently active render texture, then switch to the mirror camera's
            RenderTexture rendText = RenderTexture.active;
            RenderTexture.active = mirrorCam.targetTexture;

            //Render that camera
            mirrorCam.Render();

            //Convert to Texture2D
            Texture2D text = renderTextureToTexture2D(mirrorCam.targetTexture);
            RenderTexture.active = rendText;

            //Convert to Sprite
            Sprite sprite = texture2DToSprite(text);

            //Apply to SpriteRenderer
            spriteToUpdate.sprite = sprite;
        }
    }

    Texture2D renderTextureToTexture2D(RenderTexture rTex)
    {
        Texture2D tex = new Texture2D(rTex.width, rTex.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
        tex.Apply();
        return tex;
    }

    Sprite texture2DToSprite(Texture2D text2D)
    {
        Sprite sprite = Sprite.Create(text2D, new Rect(0, 0, text2D.width, text2D.height), Vector2.zero);
        return sprite;
    }
}
You could do it the good old Super Mario 64 way and have the wall be a screen that shows another camera's perspective of another character.
Unity is pretty good at PIP (Picture in Picture) from what I've heard, so it may be worth a shot.

Unity 5.6, What is the best approach to make "VR" video player (not 360 video!) [duplicate]

I want to play a stereo 360-degree video in virtual reality in Unity on Android. So far I have done some research, and I have two cameras for the right and left eye, each with a sphere around it. I also need a custom shader to make the image render on the inside of the sphere. I have the upper half of the image showing on one sphere by setting the y-tiling to 0.5, and the lower half shows on the other sphere with y-tiling 0.5 and y-offset 0.5. With this I can already show a correct 3D 360-degree image. The whole idea is from this tutorial.
Now for video, I need control over the video speed, so it turned out I need the VideoPlayer from the new Unity 5.6 beta. My setup so far would require the VideoPlayer to play the video on both spheres, with one sphere playing the upper part (one eye) and the other playing the lower part (other eye).
Here is my problem: I don't know how to get the VideoPlayer to play the same video on two different materials (since they have different tiling values). Is there a way to do that?
I got a hint that I could use the same material and achieve the tiling effect via UV, but I don't know how that works and I haven't even got the video player to play the video on two objects using the same material on both of them. I have a screenshot of that here. The Right sphere just has the material videoMaterial. No tiling since I'd have to do that via UV.
Which way to go and how to do it? Am I on the right track here?
Am I on the right track here?
Almost, but you are currently using a Renderer and Material instead of a RenderTexture and Material.
Which way to go and how to do it?
You need to use RenderTexture for this. Basically, you render the Video to RenderTexture then you assign that Texture to the material of both Spheres.
1. Create a RenderTexture and assign it to the VideoPlayer.
2. Create two materials for the spheres.
3. Set VideoPlayer.renderMode to VideoRenderMode.RenderTexture.
4. Set the texture of both spheres to the texture from the RenderTexture.
5. Prepare and play the video.
The code below does exactly that and should work out of the box. The only thing you need to do is modify the tiling and offset of each material to your needs.
You should also comment out:
leftSphere = createSphere("LeftEye", new Vector3(-5f, 0f, 0f), new Vector3(4f, 4f, 4f));
rightSphere = createSphere("RightEye", new Vector3(5f, 0f, 0f), new Vector3(4f, 4f, 4f));
then use a sphere imported from any 3D application. Those lines are only there for testing purposes, and it's not a good idea to play video on Unity's built-in sphere because it doesn't have enough detail to make the video look smooth.
using UnityEngine;
using UnityEngine.Video;

public class StereoscopicVideoPlayer : MonoBehaviour
{
    RenderTexture renderTexture;
    Material leftSphereMat;
    Material rightSphereMat;

    public GameObject leftSphere;
    public GameObject rightSphere;

    private VideoPlayer videoPlayer;
    //Audio
    private AudioSource audioSource;

    void Start()
    {
        //Create Render Texture
        renderTexture = createRenderTexture();

        //Create Left and Right Sphere Materials
        leftSphereMat = createMaterial();
        rightSphereMat = createMaterial();

        //Create the Left and Right Spheres
        leftSphere = createSphere("LeftEye", new Vector3(-5f, 0f, 0f), new Vector3(4f, 4f, 4f));
        rightSphere = createSphere("RightEye", new Vector3(5f, 0f, 0f), new Vector3(4f, 4f, 4f));

        //Assign material to the Spheres
        leftSphere.GetComponent<MeshRenderer>().material = leftSphereMat;
        rightSphere.GetComponent<MeshRenderer>().material = rightSphereMat;

        //Add VideoPlayer to the GameObject
        videoPlayer = gameObject.AddComponent<VideoPlayer>();

        //Add AudioSource
        audioSource = gameObject.AddComponent<AudioSource>();

        //Disable Play on Awake for both Video and Audio
        videoPlayer.playOnAwake = false;
        audioSource.playOnAwake = false;

        //We want to play from url
        videoPlayer.source = VideoSource.Url;
        videoPlayer.url = "http://www.quirksmode.org/html5/videos/big_buck_bunny.mp4";

        //Set Audio Output to AudioSource
        videoPlayer.audioOutputMode = VideoAudioOutputMode.AudioSource;

        //Assign the Audio from Video to AudioSource to be played
        videoPlayer.EnableAudioTrack(0, true);
        videoPlayer.SetTargetAudioSource(0, audioSource);

        //Set the mode of output to be RenderTexture
        videoPlayer.renderMode = VideoRenderMode.RenderTexture;

        //Set the RenderTexture to store the images to
        videoPlayer.targetTexture = renderTexture;

        //Set the Texture of both Spheres to the Texture from the RenderTexture
        assignTextureToSphere();

        //Prepare Video to prevent Buffering
        videoPlayer.Prepare();

        //Subscribe to prepareCompleted event
        videoPlayer.prepareCompleted += OnVideoPrepared;
    }

    RenderTexture createRenderTexture()
    {
        RenderTexture rd = new RenderTexture(1024, 1024, 16, RenderTextureFormat.ARGB32);
        rd.Create();
        return rd;
    }

    Material createMaterial()
    {
        return new Material(Shader.Find("Specular"));
    }

    void assignTextureToSphere()
    {
        //Set the Texture of both Spheres to the Texture from the RenderTexture
        leftSphereMat.mainTexture = renderTexture;
        rightSphereMat.mainTexture = renderTexture;
    }

    GameObject createSphere(string name, Vector3 spherePos, Vector3 sphereScale)
    {
        GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        sphere.transform.position = spherePos;
        sphere.transform.localScale = sphereScale;
        sphere.name = name;
        return sphere;
    }

    void OnVideoPrepared(VideoPlayer source)
    {
        Debug.Log("Done Preparing Video");

        //Play Video
        videoPlayer.Play();

        //Play Sound
        audioSource.Play();

        //Change Play Speed
        if (videoPlayer.canSetPlaybackSpeed)
        {
            videoPlayer.playbackSpeed = 1f;
        }
    }
}
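For the top/bottom layout described in the question, the per-eye tiling and offset could then be set on the two materials like this (which half belongs to which eye is an assumption; swap the offsets if the eyes come out inverted):
//Upper half of the video on the left sphere
leftSphereMat.mainTextureScale = new Vector2(1f, 0.5f);
leftSphereMat.mainTextureOffset = new Vector2(0f, 0.5f);

//Lower half of the video on the right sphere
rightSphereMat.mainTextureScale = new Vector2(1f, 0.5f);
rightSphereMat.mainTextureOffset = new Vector2(0f, 0f);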
There is also a Unity tutorial on how to do this with a special shader, but it did not work for me and some other people. I suggest you use the method above until VR support is added to the VideoPlayer API.

How can I stop my cameras outputting to the same render texture in Unity?

I have two renderer objects (A and B) in my scene connected to two different cameras (green square and red square):
I am using the following script on both render objects to create a render texture on the corresponding camera and then draw it as a texture on the object each frame:
using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class CameraRenderer : MonoBehaviour
{
    public Camera Camera;
    public Renderer Renderer;

    void Start()
    {
        RenderTexture renderTexture = new RenderTexture(256, 256, 16, RenderTextureFormat.ARGB32);
        renderTexture.Create();
        Camera.targetTexture = renderTexture;
    }

    void Update()
    {
        Renderer.sharedMaterial.mainTexture = GetCameraTexture();
    }

    Texture2D GetCameraTexture()
    {
        RenderTexture currentRenderTexture = RenderTexture.active;
        RenderTexture.active = Camera.targetTexture;
        Camera.Render();
        Texture2D texture = new Texture2D(Camera.targetTexture.width, Camera.targetTexture.height);
        texture.ReadPixels(new Rect(0, 0, Camera.targetTexture.width, Camera.targetTexture.height), 0, 0);
        texture.Apply();
        RenderTexture.active = currentRenderTexture;
        return texture;
    }
}
I am expecting to see two different images on A and B from the different cameras, but I am seeing the same image. I originally used a render texture that I created in the editor and attached to the camera, but thought that might be what was causing them to render the same thing, so I tried creating a new texture on each object. Sadly this still resulted in the same outcome.
I'm pretty new to Unity, so I've run out of ideas pretty fast. Any suggestions would be great!
I wouldn't advise naming your fields after their class names. Anyway, I think the renderers are sharing the same material, so they both render whichever camera's texture was assigned last.
Either use Renderer.material to automatically create a new instance of the material, or manually assign different materials to the two renderers.
Try,
Renderer.material.mainTexture = GetCameraTexture ();
Instead of,
Renderer.sharedMaterial.mainTexture = GetCameraTexture ();
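If you would rather keep sharedMaterial in Update, a minimal sketch of the manual alternative: clone the material once in Start() so each renderer gets its own instance (one possible approach, not the only one):
void Start()
{
    RenderTexture renderTexture = new RenderTexture(256, 256, 16, RenderTextureFormat.ARGB32);
    renderTexture.Create();
    Camera.targetTexture = renderTexture;

    //Clone the shared material so this renderer no longer shares it with
    //the other object; otherwise both show whichever texture was set last
    Renderer.sharedMaterial = new Material(Renderer.sharedMaterial);
}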

MonoGame Shader Porting Issues

OK, so I ported a game I have been working on over to MonoGame; however, I'm having a shader issue now that it's ported. It's an odd bug: it works in my old XNA project, and it also works the first time I use it in the new MonoGame project, but not after that unless I restart the game.
The shader is a very simple shader that looks at a greyscale image and, based on the grey, picks a color from the lookup texture. Basically I'm using this to randomize a sprite image for an enemy every time a new enemy is placed on the screen. It works for the first time an enemy is spawned, but doesn't work after that, just giving a completely transparent texture (not a null texture).
Also, I'm only targeting Windows Desktop for now, but I am planning to target Mac and Linux at some point.
Here is the shader code itself.
sampler input : register(s0);
Texture2D colorTable;
float seed; //calculate in program, pass to shader (between 0 and 1)

sampler colorTableSampler =
sampler_state
{
    Texture = <colorTable>;
};

float4 PixelShaderFunction(float2 c: TEXCOORD0) : COLOR0
{
    //get current pixel of the texture (greyscale)
    float4 color = tex2D(input, c);

    //set the values to compare to.
    float hair = 139/255; float hairless = 140/255;
    float shirt = 181/255; float shirtless = 182/255;

    //var to hold new color
    float4 swap;

    //pixel coordinate for lookup
    float2 i;
    i.y = 1;

    //compare and swap
    if (color.r >= hair && color.r <= hairless)
    {
        i.x = ((0.5 + seed + 96)/128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r >= shirt && color.r <= shirtless)
    {
        i.x = ((0.5 + seed + 64)/128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 1)
    {
        i.x = ((0.5 + seed + 32)/128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 0)
    {
        i.x = ((0.5 + seed)/128);
        swap = tex2D(colorTableSampler, i);
    }
    return swap;
}

technique ColorSwap
{
    pass Pass1
    {
        // TODO: set renderstates here.
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
And here is the function that creates the texture. I should also note that the texture generation works fine without the shader; I just get the greyscale base image.
public static Texture2D createEnemyTexture(GraphicsDevice gd, SpriteBatch sb)
{
    //get a random number to pass into the shader.
    Random r = new Random();
    float seed = (float)r.Next(0, 32);

    //create the texture to copy color data into
    Texture2D enemyTex = new Texture2D(gd, CHARACTER_SIDE, CHARACTER_SIDE);

    //create a render target to draw a character to.
    RenderTarget2D rendTarget = new RenderTarget2D(gd, CHARACTER_SIDE, CHARACTER_SIDE,
        false, gd.PresentationParameters.BackBufferFormat, DepthFormat.None);
    gd.SetRenderTarget(rendTarget);

    //set background of new render target to transparent.
    //gd.Clear(Microsoft.Xna.Framework.Color.Black);

    //start drawing to the new render target
    sb.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
        SamplerState.PointClamp, DepthStencilState.None, RasterizerState.CullNone);

    //send the random value to the shader.
    Graphics.GlobalGfx.colorSwapEffect.Parameters["seed"].SetValue(seed);

    //send the palette texture to the shader.
    Graphics.GlobalGfx.colorSwapEffect.Parameters["colorTable"].SetValue(Graphics.GlobalGfx.palette);

    //apply the effect
    Graphics.GlobalGfx.colorSwapEffect.CurrentTechnique.Passes[0].Apply();

    //draw the texture (now with color!)
    sb.Draw(enemyBase, new Microsoft.Xna.Framework.Vector2(0, 0), Microsoft.Xna.Framework.Color.White);

    //end drawing
    sb.End();

    //reset rendertarget
    gd.SetRenderTarget(null);

    //copy the drawn and colored enemy to a non-volatile texture (instead of render target)
    //create the color array the size of the texture.
    Color[] cs = new Color[CHARACTER_SIDE * CHARACTER_SIDE];

    //get all color data from the render target
    rendTarget.GetData<Color>(cs);

    //move the color data into the texture.
    enemyTex.SetData<Color>(cs);

    //return the finished texture.
    return enemyTex;
}
And just in case, the code for loading in the shader:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
colorSwapEffect = new Effect(gd, Reader.ReadBytes((int)Reader.BaseStream.Length));
If anyone has ideas to fix this, I'd really appreciate it, and just let me know if you need other info about the problem.
I am not sure why you have the "at" (@) sign in front of the string when you have escaped the backslashes - unless you want to have \\ in your string, but that looks strange in a file path.
You wrote in your code:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
Unless you want \\ inside your string do
BinaryReader Reader = new BinaryReader(File.Open(@"Content\shaders\test.mgfx", FileMode.Open));
or
BinaryReader Reader = new BinaryReader(File.Open("Content\\shaders\\test.mgfx", FileMode.Open));
but do not use both.
I don't see anything super obvious just reading through it, but really this could be tricky for someone to figure out just looking at your code.
I'd recommend doing a graphics profile (via Visual Studio), capturing a frame that renders correctly and then a frame that renders incorrectly, and comparing the state of the two.
E.g., is the input texture what you expect it to be, are pixels being output but culled, is the output correct on the render target (in which case the problem could be Get/SetData), etc.
Change ps_2_0 to ps_4_0_level_9_3.
MonoGame cannot use shaders built on HLSL shader model 2.
Also, the built-in sprite batch shader uses ps_4_0_level_9_3 and vs_4_0_level_9_3, and you will get issues if you try to replace the pixel portion of a shader with a different-level shader.
This is the only issue I can see with your code.
