How to access raw data from RenderTexture in Unity - c#

Short Version of Problem
I am trying to access the contents of a RenderTexture in Unity which I have been drawing to with my own Material using Graphics.Blit.
Graphics.Blit (null, renderTexture, material);
My material successfully converts a YUV image to RGB, which I have tested by assigning the result to the texture of a UI element. The correct RGB image is visible on the screen.
However, I also need the raw data for a QR code scanner. I am doing this the same way I would access it from a camera, as explained here. In a comment there it was mentioned that the extraction is also possible from a RenderTexture that was filled with Graphics.Blit. But when I try that, my texture contains only the value 205 everywhere. This is the code I am using in the Update function, directly after the Graphics.Blit call:
RenderTexture.active = renderTexture;
texture.ReadPixels (new Rect (0, 0, width, height), 0, 0);
texture.Apply ();
RenderTexture.active = null;
When I assign this texture to the same UI element, it is gray and slightly transparent. When I inspect the image values, they are all 205.
Why is this happening? Could there be a problem with the formats of the RenderTexture and the Texture2D I am trying to fill?
Complete Code
Below is the complete code I am using. The variable names differ slightly from the ones above, but they do essentially the same thing:
/**
* This class continuously converts the y and uv textures in
* YUV color space to an RGB texture, which can be used somewhere else
*/
public class YUV2RGBConverter : MonoBehaviour {
public Material yuv2rgbMat;
// Input textures, set these when they are available
[HideInInspector]
public Texture2D yTex;
[HideInInspector]
public Texture2D uvTex;
// Output, the converted textures
[HideInInspector]
public RenderTexture rgbRenderTex;
[HideInInspector]
public Texture2D rgbTex;
[HideInInspector]
public Color32[] rawRgbData;
/// Describes how often per second the image should be transferred to the CPU
public float GPUTransferRate = 1.0f;
private float timeSinceLastGPUTransfer = 0.0f;
private int width;
private int height;
/**
* Initializes the used textures
*/
void Start () {
updateSize (width, height);
}
/**
* Creates the textures needed by this class, based on the given width and height
*/
public void updateSize(int width, int height)
{
// Generate the input textures
yTex = new Texture2D(width / 4, height, TextureFormat.RGBA32, false);
uvTex = new Texture2D ((width / 2) * 2 / 4, height / 2, TextureFormat.RGBA32, false);
// Generate the output texture
rgbRenderTex = new RenderTexture(width, height, 0);
rgbRenderTex.antiAliasing = 0;
rgbTex = new Texture2D (width, height, TextureFormat.RGBA32, false);
// Set to shader
yuv2rgbMat.SetFloat("_TexWidth", width);
yuv2rgbMat.SetFloat("_TexHeight", height);
}
/**
* Sets the y and uv textures to some dummy data
*/
public void fillYUWithDummyData()
{
// Set the y tex everywhere to the fractional part of the current time
float colorValue = (float)Time.time - (float)((int)Time.time);
for (int y = 0; y < yTex.height; y++) {
for (int x = 0; x < yTex.width; x++) {
Color yColor = new Color (colorValue, colorValue, colorValue, colorValue);
yTex.SetPixel (x, y, yColor);
}
}
yTex.Apply ();
// Set the uv tex colors
for (int y = 0; y < uvTex.height; y++) {
for (int x = 0; x < uvTex.width; x++) {
int firstXCoord = 2 * x;
int secondXCoord = 2 * x + 1;
int yCoord = y;
float firstXRatio = (float)firstXCoord / (2.0f * (float)uvTex.width);
float secondXRatio = (float)secondXCoord / (2.0f * (float)uvTex.width);
float yRatio = (float)y / (float)uvTex.height;
Color uvColor = new Color (firstXRatio, yRatio, secondXRatio, yRatio);
uvTex.SetPixel (x, y, uvColor);
}
}
uvTex.Apply ();
}
/**
* Continuously convert y and uv texture to rgb texture with custom yuv2rgb shader
*/
void Update () {
// Draw to it with the yuv2rgb shader
yuv2rgbMat.SetTexture ("_YTex", yTex);
yuv2rgbMat.SetTexture ("_UTex", uvTex);
Graphics.Blit (null, rgbRenderTex, yuv2rgbMat);
// Only scan once per second
if (timeSinceLastGPUTransfer > 1 / GPUTransferRate) {
timeSinceLastGPUTransfer = 0;
// Fetch its pixels and set it to rgb texture
RenderTexture.active = rgbRenderTex;
rgbTex.ReadPixels (new Rect (0, 0, width, height), 0, 0);
rgbTex.Apply ();
RenderTexture.active = null;
rawRgbData = rgbTex.GetPixels32 ();
} else {
timeSinceLastGPUTransfer += Time.deltaTime;
}
}
}

OK, sorry that I have to answer my own question. The solution is very simple:
The width and height properties that I was using in this line:
rgbTex.ReadPixels (new Rect (0, 0, width, height), 0, 0);
were never initialized, so they were 0.
I just had to add these lines to the updateSize function:
this.width = width;
this.height = height;
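For reference, a minimal sketch of the fixed updateSize (the texture creation stays exactly as in the complete code above):
public void updateSize(int width, int height)
{
    // store the size so the ReadPixels call in Update() covers the full texture
    this.width = width;
    this.height = height;

    // ... create yTex, uvTex, rgbRenderTex and rgbTex as before ...
}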

Related

Split Audio Waveform sprite whose width is out of range in a Scroll Rect

I'm new to Unity 3D and trying to split a Texture2D sprite that contains an audio waveform inside a Scroll Rect. The waveform comes from an audio source imported by the user and is added to the Scroll Rect horizontally, like a timeline. The script that creates the waveform works, but the width variable (which comes from another script, though that is not the problem) exceeds the limits of a Texture2D; only if I manually set a width below 16000 does the waveform appear, and even then it doesn't reach the end of the Scroll Rect. A 3-4 minute song typically needs a width of 55000-60000 pixels, which can't be rendered. I need to split that waveform Texture2D sprite horizontally into multiple parts (or children) placed together, and render them only when they appear on screen. How can I do that? Thank you in advance.
This creates the waveform sprite; this is where it should be split into multiple sprites placed together horizontally and rendered only when they appear on the screen:
public void LoadWaveform(AudioClip clip)
{
Texture2D texwav = waveformSprite.GetWaveform(clip);
Rect rect = new Rect(Vector2.zero, new Vector2(Realwidth, 180));
waveformImage.sprite = Sprite.Create(texwav, rect, Vector2.zero);
waveformImage.SetNativeSize();
}
This creates the waveform from an audio clip (getting from the internet and modifying for my project) :
public class WaveformSprite : MonoBehaviour
{
private int width = 16000; //This should be the variable from another script
private int height = 180;
public Color background = Color.black;
public Color foreground = Color.yellow;
private int samplesize;
private float[] samples = null;
private float[] waveform = null;
private float arrowoffsetx;
public Texture2D GetWaveform(AudioClip clip)
{
int halfheight = height / 2;
float heightscale = (float)height * 0.75f;
// get the sound data
Texture2D tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
waveform = new float[width];
Debug.Log("NUMERO DE SAMPLES: " + clip.samples);
var clipSamples = clip.samples;
samplesize = clipSamples * clip.channels;
samples = new float[samplesize];
clip.GetData(samples, 0);
int packsize = (samplesize / width);
for (int w = 0; w < width; w++)
{
waveform[w] = Mathf.Abs(samples[w * packsize]);
}
// map the sound data to texture
// 1 - clear
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
tex.SetPixel(x, y, background);
}
}
// 2 - plot
for (int x = 0; x < width; x++)
{
for (int y = 0; y < waveform[x] * heightscale; y++)
{
tex.SetPixel(x, halfheight + y, foreground);
tex.SetPixel(x, halfheight - y, foreground);
}
}
tex.Apply();
return tex;
}
}
Instead of reading all the samples in one loop to populate waveform[], read only the amount needed for the current texture (utilizing an offset to track position in the array).
Calculate the number of textures your function will output.
var textureCount = Mathf.CeilToInt((float)totalWidth / maxTextureWidth); // max texture width 16,000
Create an outer loop to generate each texture.
for (int i = 0; i < textureCount; i++)
Calculate the current texture's width (used for the waveform array and the drawing loops).
var textureWidth = Mathf.Min(totalWidth - (maxTextureWidth * i), maxTextureWidth);
Utilize an offset for populating the waveform array.
for (int w = 0; w < textureWidth; w++)
{
waveform[w] = Mathf.Abs(samples[(w + offset) * packSize]);
}
With offset increasing at the end of each texture's loop iteration by the number of waveform columns used for that texture (i.e. the texture width).
offset += textureWidth;
In the end, the function returns an array of Texture2D instead of a single texture.
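Putting these pieces together, here is a rough sketch of a GetWaveform variant that returns an array of textures. totalWidth and maxTextureWidth are assumed parameters; background, foreground and height are the existing fields of WaveformSprite:
public Texture2D[] GetWaveformTextures(AudioClip clip, int totalWidth, int maxTextureWidth = 16000)
{
    int halfheight = height / 2;
    float heightscale = (float)height * 0.75f;

    // read all sample data once
    int samplesize = clip.samples * clip.channels;
    float[] samples = new float[samplesize];
    clip.GetData(samples, 0);
    int packsize = samplesize / totalWidth;

    int textureCount = Mathf.CeilToInt((float)totalWidth / maxTextureWidth);
    Texture2D[] textures = new Texture2D[textureCount];
    int offset = 0;

    for (int i = 0; i < textureCount; i++)
    {
        // width of this slice, capped at the maximum texture width
        int textureWidth = Mathf.Min(totalWidth - (maxTextureWidth * i), maxTextureWidth);
        Texture2D tex = new Texture2D(textureWidth, height, TextureFormat.RGBA32, false);

        for (int x = 0; x < textureWidth; x++)
        {
            // sample the waveform using the running offset into the clip data
            float value = Mathf.Abs(samples[(x + offset) * packsize]);

            for (int y = 0; y < height; y++)
                tex.SetPixel(x, y, background);

            for (int y = 0; y < value * heightscale; y++)
            {
                tex.SetPixel(x, halfheight + y, foreground);
                tex.SetPixel(x, halfheight - y, foreground);
            }
        }

        tex.Apply();
        textures[i] = tex;
        offset += textureWidth; // advance past the columns used by this slice
    }

    return textures;
}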

Blend 2 Textures Unity C#

How can I blend two textures into a new one?
I have a texture from the Android gallery and a logo PNG texture. I need to add this logo to the gallery texture and store the result in a variable so I can save it back to the gallery as a new image.
These shaders blend between two textures based on a 0-1 value that you control. The first version is extra-fast because it does not use lighting, and the second uses the same basic ambient + diffuse calculation that I used in my Simply Lit shader.
http://wiki.unity3d.com/index.php/Blend_2_Textures
Drag a different texture onto each of the material's variable slots, and use the Blend control to mix them to taste.
Take note that the lit version requires two passes on the GPU used in the oldest iOS devices.
ShaderLab - Blend 2 Textures.shader
Shader "Blend 2 Textures" {
Properties {
_Blend ("Blend", Range (0, 1) ) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
}
}
ShaderLab - Blend 2 Textures, Simply Lit.shader
Shader "Blend 2 Textures, Simply Lit" {
Properties {
_Color ("Color", Color) = (1,1,1)
_Blend ("Blend", Range (0,1)) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
Category {
Material {
Ambient[_Color]
Diffuse[_Color]
}
// iPhone 3GS and later
SubShader {Pass {
Lighting On
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
SetTexture[_] {Combine previous * primary Double}
}}
// pre-3GS devices, including the September 2009 8GB iPod touch
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
Pass {
Lighting On
Blend DstColor SrcColor
}
}
}
}
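If you want to drive the blend amount from code at runtime, a small sketch (assuming the object's material uses one of the shaders above; the script name is just an example):
using UnityEngine;

public class BlendDriver : MonoBehaviour
{
    [Range(0f, 1f)] public float blend = 0.5f;

    void Update()
    {
        // push the slider value into the shader's _Blend property every frame
        GetComponent<Renderer>().material.SetFloat("_Blend", blend);
    }
}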
I had a similar task with a paint tool I was making. So here's my approach:
First, import or instantiate the logo and picture textures as Texture2D in order to use the Texture2D.GetPixel() and Texture2D.SetPixel() methods.
Assuming the logo is smaller than the picture itself, store the logo pixels in a Color[] array:
Color[] logoPixels = logo.GetPixels();
We need to apply the logo on top of the picture, taking the alpha channel of the logo image into account:
//Method GetPixels stores pixel colors in a 1D array
int i = 0; //Logo pixel index
for (int y = 0; y < logo.height; y++) {
for (int x = 0; x < logo.width; x++) {
//Get color of the original pixel
Color c = picture.GetPixel (logoPositionX + x, logoPositionY + y);
//Lerp pixel color by the logo's alpha value
picture.SetPixel (logoPositionX + x, logoPositionY + y, Color.Lerp (c, logoPixels[i], logoPixels[i].a));
i++;
}
}
//Apply changes
picture.Apply();
So, if a pixel's alpha is 0, we leave it unchanged.
Get the bytes of the resulting image with picture.EncodeToPNG() and save them as a PNG file in the usual way. To use the SetPixel() and SetPixels() methods, make sure both the logo and the picture it is being applied to have Read/Write Enabled set in their import settings!
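For the save step, a minimal sketch (assuming picture holds the blended result; the file name and persistentDataPath are just example choices):
// encode the blended texture to PNG bytes and write them to disk
byte[] pngBytes = picture.EncodeToPNG();
System.IO.File.WriteAllBytes(
    System.IO.Path.Combine(Application.persistentDataPath, "blended.png"),
    pngBytes);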
It's an old question but I have another solution:
public static Texture2D merge(params Texture2D[] textures) {
if (textures == null || textures.Length == 0)
return null;
int oldQuality = QualitySettings.GetQualityLevel();
QualitySettings.SetQualityLevel(5);
RenderTexture renderTex = RenderTexture.GetTemporary(
textures[0].width,
textures[0].height,
0,
RenderTextureFormat.Default,
RenderTextureReadWrite.Linear);
Graphics.Blit(textures[0], renderTex);
RenderTexture previous = RenderTexture.active;
RenderTexture.active = renderTex;
GL.PushMatrix();
GL.LoadPixelMatrix(0, textures[0].width, textures[0].height, 0);
for (int i = 1; i < textures.Length; i++)
Graphics.DrawTexture(new Rect(0, 0, textures[0].width, textures[0].height), textures[i]);
GL.PopMatrix();
Texture2D readableText = new Texture2D(textures[0].width, textures[0].height);
readableText.ReadPixels(new Rect(0, 0, renderTex.width, renderTex.height), 0, 0);
readableText.Apply();
RenderTexture.active = previous;
RenderTexture.ReleaseTemporary(renderTex);
QualitySettings.SetQualityLevel(oldQuality);
return readableText;
}
And here is the use:
Texture2D coloredTex = ImageUtils.merge(tex,
sprites[0].texture,
sprites[1].texture,
sprites[2].texture,
sprites[3].texture);
Hope it helps
I made this solution; it works with two Texture2D objects in Unity.
public Texture2D ImageBlend(Texture2D Bottom, Texture2D Top)
{
var bData = Bottom.GetPixels();
var tData = Top.GetPixels();
int count = bData.Length;
var final = new Color[count];
int i = 0;
int iT = 0;
int startPos = (Bottom.width / 2) - (Top.width / 2) -1;
int endPos = Bottom.width - startPos -1;
for (int y = 0; y < Bottom.height; y++)
{
for (int x = 0; x < Bottom.width; x++)
{
if (y > startPos && y < endPos && x > startPos && x < endPos)
{
Color B = bData[i];
Color T = tData[iT];
Color R;
R = new Color((T.a * T.r) + ((1-T.a) * B.r),
(T.a * T.g) + ((1 - T.a) * B.g),
(T.a * T.b) + ((1 - T.a) * B.b), 1.0f);
final[i] = R;
i++;
iT++;
}
else
{
final[i] = bData[i];
i++;
}
}
}
var res = new Texture2D(Bottom.width, Bottom.height);
res.SetPixels(final);
res.Apply();
return res;
}
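A usage sketch (names are illustrative): both textures must have Read/Write enabled, and because startPos and endPos are computed from the widths only, this version assumes the top texture is square and centered on the bottom one.
// blend the logo on top of the gallery texture
Texture2D combined = ImageBlend(galleryTexture, logoTexture);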

Apply texture in a quad mesh from a texture atlas

I'm trying to dynamically apply a texture from a texture atlas to a quad mesh in Unity3D.
When I do the same with a cube mesh, the front face works fine but the other faces get distorted. So I had the idea to use a simple quad, and now I'm facing this scenario:
The image should be displayed like this:
I'm placing the texture with the code below. The math works fine:
public int offsetX = 0;
public int offsetY = 0;
private const float offset = 0.0625f; // the UV size of each tile in the atlas (1/16)
void Start ()
{
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector2[] UVs = new Vector2[mesh.vertices.Length];
UVs[0] = new Vector2(offsetX * offset, offsetY * offset);
UVs[1] = new Vector2((offsetX * offset) + offset, offsetY * offset);
UVs[2] = new Vector2(offsetX * offset, (offsetY * offset) + offset);
UVs[3] = new Vector2((offsetX * offset) + offset, (offsetY * offset) + offset);
mesh.uv = UVs;
}
What should I do to place the texture in the quad mesh as the image reference?
For those looking for an answer:
I've fixed it by changing the offset and scale (tiling) of the material. Example:
using UnityEngine;
public class Cube : MonoBehaviour {
public int offsetX = 0;
public int offsetY = 0;
private Renderer _rend;
private Material _material;
private const float Offset = 0.0625f;
// Use this for initialization
private void Start ()
{
_rend = GetComponent<Renderer>();
_material = _rend.material;
_material.mainTextureScale = new Vector2(Offset,Offset);
}
private void Update()
{
_material.mainTextureOffset = new Vector2(offsetX*Offset,offsetY*Offset);
}
}

Unity watermark on image after screenshot

I am trying to add a watermark to my image, and this is the code I have for taking a screenshot. Can someone show me how to add a watermark to my image? I want a small logo at the top right-hand side of the image.
I have been researching whether I could make what I have in the canvas stay visible when a screenshot is taken, but with no luck so far. I would really appreciate it if someone could help me out here!
public string MakePhoto(bool openIt)
{
int resWidth = Screen.width;
int resHeight = Screen.height;
Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false); //Create new texture
RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);
// hide the info-text, if any
if (infoText)
{
infoText.text = string.Empty;
}
// render background and foreground cameras
if (backroundCamera && backroundCamera.enabled)
{
backroundCamera.targetTexture = rt;
backroundCamera.Render();
backroundCamera.targetTexture = null;
}
if (backroundCamera2 && backroundCamera2.enabled)
{
backroundCamera2.targetTexture = rt;
backroundCamera2.Render();
backroundCamera2.targetTexture = null;
}
if (foreroundCamera && foreroundCamera.enabled)
{
foreroundCamera.targetTexture = rt;
foreroundCamera.Render();
foreroundCamera.targetTexture = null;
}
// get the screenshot
RenderTexture prevActiveTex = RenderTexture.active;
RenderTexture.active = rt;
screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
// clean-up
RenderTexture.active = prevActiveTex;
Destroy(rt);
byte[] btScreenShot = screenShot.EncodeToJPG();
Destroy(screenShot);
#if !UNITY_WSA
// save the screenshot as jpeg file
string sDirName = Application.persistentDataPath + "/Screenshots";
if (!Directory.Exists(sDirName))
Directory.CreateDirectory (sDirName);
string sFileName = sDirName + "/" + string.Format ("{0:F0}", Time.realtimeSinceStartup * 10f) + ".jpg";
File.WriteAllBytes(sFileName, btScreenShot);
Debug.Log("Photo saved to: " + sFileName);
if (infoText)
{
infoText.text = "Saved to: " + sFileName;
}
// open file
if(openIt)
{
System.Diagnostics.Process.Start(sFileName);
}
return sFileName;
#endif
}
PS: I found this which might be useful?
public Texture2D AddWatermark(Texture2D background, Texture2D watermark)
{
int startX = 0;
int startY = background.height - watermark.height;
for (int x = startX; x < background.width; x++)
{
for (int y = startY; y < background.height; y++)
{
Color bgColor = background.GetPixel(x, y);
Color wmColor = watermark.GetPixel(x - startX, y - startY);
Color final_color = Color.Lerp(bgColor, wmColor, wmColor.a / 1.0f);
background.SetPixel(x, y, final_color);
}
}
background.Apply();
return background;
}
Select the imported image in the Project view, set its Texture Type to Sprite (2D and UI) in the Inspector (see the Sprites Manual) and hit Apply.
Add a field for it to your class, like
public Texture2D watermark;
Reference the watermark in the Inspector
You could simply add the watermark as an overlay by adding the Color values from both textures for each pixel (assuming they have the same size!), as in the sketch below.
If you want the watermark only in a certain rect of the texture, scale it accordingly and use Texture2D.SetPixels(int x, int y, int blockWidth, int blockHeight, Color[] colors) (this assumes the watermark image is smaller in pixels than the screenshot!), like
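A rough sketch of that same-size overlay (both textures must be readable and of identical size; fully transparent watermark pixels with black RGB leave the screenshot unchanged):
private static void AddFullSizeWatermark(Texture2D texture, Texture2D watermarkTexture)
{
    Color[] screenPixels = texture.GetPixels();
    Color[] watermarkPixels = watermarkTexture.GetPixels();

    for (int i = 0; i < screenPixels.Length; i++)
    {
        // simple additive overlay of the watermark color onto the screenshot
        screenPixels[i] += watermarkPixels[i];
    }

    texture.SetPixels(screenPixels);
    texture.Apply();
}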
private static void AddWaterMark(Texture2D texture, Texture2D watermarkTexture)
{
int watermarkWidth = watermarkTexture.width;
int watermarkHeight = watermarkTexture.height;
// In Unity, contrary to most expectations, the pixel coordinate
// (0,0) is not the top-left corner but the bottom-left,
// so since you want the watermark in the top-right corner do
int startx = texture.width - watermarkWidth;
// optionally you could also still leave a border of e.g. 10 pixels by using
// int startx = texture.width - watermarkWidth - 10;
// same for the y axis
int starty = texture.height - watermarkHeight;
Color[] watermarkPixels = watermarkTexture.GetPixels();
// get the texture pixels for the given rect
Color[] originalPixels = texture.GetPixels(startx, starty, watermarkWidth, watermarkHeight);
for(int i = 0; i < watermarkPixels.Length; i++)
{
var pixel = watermarkPixels[i];
// adjust the alpha value of the watermark
pixel.a *= 0.5f;
// add watermark pixel to original pixel
originalPixels[i] += pixel;
}
// write back the changed texture data
texture.SetPixels(startx, starty, watermarkWidth, watermarkHeight, originalPixels);
texture.Apply();
}
call it like
screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
AddWaterMark(screenShot, watermark);

Draw simple circle in XNA

I want to draw a 2D, filled circle. I've looked everywhere and cannot seem to find anything that will even remotely help me draw a circle. I simply want to specify a height, width, and location on my canvas.
Anyone know how?
Thanks!
XNA doesn't normally have the idea of a canvas you can paint on. Instead you can either create a circle in your favorite paint program and render it as a sprite, or create a series of vertices in a 3D mesh to approximate a circle and render that.
You could also check out the sample framework that Jeff Weber uses in Farseer:
http://www.codeplex.com/FarseerPhysics
The demos have a dynamic texture generator that lets him make circles and rectangles (which the samples then use as the visualization of the physics simulation). You could just re-use that :-)
Had the same problem; as others already suggested, you need to draw a square or rectangle with a circle texture on it. Here is my method to create a circle texture at runtime. Not the most efficient or fancy way to do it, but it works.
Texture2D createCircleText(int radius)
{
Texture2D texture = new Texture2D(GraphicsDevice, radius, radius);
Color[] colorData = new Color[radius*radius];
float diam = radius / 2f;
float diamsq = diam * diam;
for (int x = 0; x < radius; x++)
{
for (int y = 0; y < radius; y++)
{
int index = x * radius + y;
Vector2 pos = new Vector2(x - diam, y - diam);
if (pos.LengthSquared() <= diamsq)
{
colorData[index] = Color.White;
}
else
{
colorData[index] = Color.Transparent;
}
}
}
texture.SetData(colorData);
return texture;
}
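A minimal usage sketch, assuming the standard XNA Game template (with its GraphicsDevice and a spriteBatch field) and the createCircleText method above:
Texture2D circle;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    circle = createCircleText(100); // 100x100 texture containing a filled circle
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    spriteBatch.Begin();
    // the destination rectangle controls position and displayed width/height
    spriteBatch.Draw(circle, new Rectangle(50, 50, 100, 100), Color.White);
    spriteBatch.End();
    base.Draw(gameTime);
}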
Out of the box, there's no support for this in XNA. I'm assuming you're coming from some GDI background and just want to see something moving around onscreen. In a real game though, this is seldom if ever needed.
There's some helpful info here:
http://forums.xna.com/forums/t/7414.aspx
My advice to you would be to just fire up paint or something, and create the basic shapes yourself and use the Content Pipeline.
Another option (if you want to use a more complex gradient brush or something) is to draw a quad aligned to the screen and use a pixel shader.
What I did to solve this was to paint a rectangular texture, leaving the area of the rectangle which doesn't contain the circle transparent. You check to see if a point in the array is contained within a circle originating from the center of the rectangle.
Using the color data array is a bit weird because it's not a 2D array. My solution was to bring some 2D array logic into the scenario.
public Texture2D GetColoredCircle(float radius, Color desiredColor)
{
radius = radius / 2;
int width = (int)radius * 2;
int height = width;
Vector2 center = new Vector2(radius, radius);
Circle circle = new Circle(center, radius,false);
Color[] dataColors = new Color[width * height];
int row = -1; //increased on first iteration to zero!
int column = 0;
for (int i = 0; i < dataColors.Length; i++)
{
column++;
if(i % width == 0) //if we reach the right side of the rectangle go to the next row as if we were using a 2D array.
{
row++;
column = 0;
}
Vector2 point = new Vector2(row, column); //basically the next pixel.
if(circle.ContainsPoint(point))
{
dataColors[i] = desiredColor; //point lies within the radius. Paint it.
}
else
{
dataColors[i] = Color.Transparent; //point lies outside, leave it transparent.
}
}
Texture2D texture = new Texture2D(GraphicsDevice, width, height);
texture.SetData(0, new Rectangle(0, 0, width, height), dataColors, 0, width * height);
return texture;
}
And here's the method to check whether or not a point is contained within your circle:
public bool ContainsPoint(Vector2 point)
{
return ((point - this.Center).Length() <= this.Radius);
}
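The code above relies on a Circle type that isn't shown; a minimal sketch of what it presumably looks like (the third constructor argument is unused here and kept only to match the call above):
public struct Circle
{
    public Vector2 Center;
    public float Radius;

    public Circle(Vector2 center, float radius, bool filled)
    {
        Center = center;
        Radius = radius;
    }

    public bool ContainsPoint(Vector2 point)
    {
        return (point - this.Center).Length() <= this.Radius;
    }
}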
Hope this helps!
public Texture2D createCircleText(int radius, GraphicsDevice device, Color color, int thickness)
{
Texture2D texture = new Texture2D(device, radius, radius);
Color[] colorData = new Color[radius * radius];
if (thickness >= radius) thickness = radius - 5;
float diam = radius / 2f;
float diamsq = diam * diam;
float intdiam = (radius - thickness) / 2f;
float intdiamsq = intdiam * intdiam;
for (int x = 0; x < radius; x++)
{
for (int y = 0; y < radius; y++)
{
int index = x * radius + y;
Vector2 pos = new Vector2(x - diam, y - diam);
if (pos.LengthSquared() <= diamsq)
{
colorData[index] = color;
}
else
{
colorData[index] = Color.Transparent;
}
if (pos.LengthSquared() <= intdiamsq)
{
colorData[index] = Color.Transparent;
}
}
}
texture.SetData(colorData);
return texture;
}
