Saving Target Texture as an image - c#

I have a quad with a target texture set through the Unity Editor. I would like to save the visualization shown on the quad as an image. Are there any ways to do this? I have tried going through a Texture2D, but the saved image is just black.

Try this
private void SaveImage(Texture t, string path)
{
    // Blit the source texture into a temporary RenderTexture so its pixels can be read back.
    RenderTexture rt = RenderTexture.GetTemporary(t.width, t.height, 0);
    Graphics.Blit(t, rt);
    // ReadPixels reads from the currently active RenderTexture.
    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = rt;
    Texture2D t2d = new Texture2D(rt.width, rt.height, TextureFormat.RGB24, false);
    t2d.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    t2d.Apply();
    RenderTexture.active = previous;
    RenderTexture.ReleaseTemporary(rt);
    File.WriteAllBytes(path, t2d.EncodeToPNG());
}
Usage
SaveImage(yourQuad.GetComponent<MeshRenderer>().material.mainTexture, "yourSavePath.png");
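If the save path is the part that fails on a device build, a folder that is always writable is Application.persistentDataPath; a hedged variant of the call above (the file name is just an example):
string savePath = Path.Combine(Application.persistentDataPath, "quad_output.png");
SaveImage(yourQuad.GetComponent<MeshRenderer>().material.mainTexture, savePath);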

Related

Unity Picture From Camera Clear in Center, Distorted at Edges

I am fairly new to Unity but am trying to take a photo from the camera and save it. Taking a screen capture is not an option. When I take the photo it appears clear in the center but gets increasingly distorted towards the edges, and I am not sure why. The main camera is linked to the realCamera object in Unity.
The screenshot code, which is called in Update, is:
DirectoryInfo screenshotDirectory = Directory.CreateDirectory(directoryName);
fileO = original + fileNameEnd + count.ToString() + fileType;
string fullPathO = Path.Combine(screenshotDirectory.FullName, fileO);
RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);
Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false);
byte[] bytes;
realCamera.targetTexture = rt;
realCamera.Render();
RenderTexture.active = rt;
screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
realCamera.targetTexture = null;
RenderTexture.active = null;
Destroy(rt);
bytes = screenShot.EncodeToPNG();
System.IO.File.WriteAllBytes(fullPathO, bytes);

Creating a Sprite in size

I have a gallery scene and I want to load PNGs from the persistent data path.
The thing is that I want them as thumbnails; there's no need for me to load the full-size file.
How can I define the scale of the sprite?
This is the relevant line for creating the sprite:
Sprite sp1 = Sprite.Create(texture1, new Rect(0, 0, texture1.width, texture1.height), new Vector2(0.5f, 0.5f), 100, 0, SpriteMeshType.FullRect);
and this is the texture creation code:
Texture2D takeScreenShotImage(string filePath)
{
    Texture2D texture = null;
    byte[] fileBytes;
    if (File.Exists(filePath))
    {
        fileBytes = File.ReadAllBytes(filePath);
        texture = new Texture2D(1, 1, TextureFormat.ETC2_RGB, false);
        texture.LoadImage(fileBytes);
    }
    return texture;
}
The proper place to make that change is on the Texture2D, after loading the texture. If you really want to load them as thumbnails and want the size to be smaller to save memory, resize the texture with the Texture2D.Resize function after loading it. A height and width of 60 should be fine. You can then create a Sprite from it with Sprite.Create.
Texture2D takeScreenShotImage(string filePath)
{
    Texture2D texture = null;
    byte[] fileBytes;
    if (File.Exists(filePath))
    {
        fileBytes = File.ReadAllBytes(filePath);
        texture = new Texture2D(1, 1, TextureFormat.ETC2_RGB, false);
        texture.LoadImage(fileBytes);
        //RE-SIZE THE Texture2D (kept inside the if-block so a missing file does not throw)
        texture.Resize(60, 60);
        texture.Apply();
    }
    return texture;
}
Note that TextureFormat.ETC2_RGB is compressed. To resize it you may have to create a copy of it. See #1 solution from my other post on how to do this.
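For completeness, a minimal sketch of turning the resized texture into a thumbnail sprite; the galleryImage field is hypothetical, and the Sprite.Create call mirrors the one from the question:
Texture2D thumbTex = takeScreenShotImage(filePath);
if (thumbTex != null)
{
    Sprite thumb = Sprite.Create(thumbTex,
        new Rect(0, 0, thumbTex.width, thumbTex.height),
        new Vector2(0.5f, 0.5f), 100, 0, SpriteMeshType.FullRect);
    galleryImage.sprite = thumb; // e.g. a UnityEngine.UI.Image in the gallery
}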

Resizing a sprite by adding transparent pixels

The reason I wish to do so is that Unity has a nice DXT5 format that reduces the file size by a lot. But to get that, I need a sprite whose size, both height and width, is a multiple of 4.
So I thought I'd create a new texture of the desired size, fill its pixels with the original's pixels, and make a sprite out of it that I save as an asset.
The issue is that while saving the texture works (I get the same texture with the proper dimensions), saving the sprite doesn't. It spits out something, but it isn't even close to what I need.
Here is the code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class ResizeSprites
{
    public void Resize(Sprite sprite)
    {
        int _hei, _wid;
        //getting the closest higher values that are a multiple of 4.
        for (_hei = sprite.texture.height; _hei % 4 != 0; _hei++) ;
        for (_wid = sprite.texture.width; _wid % 4 != 0; _wid++) ;
        //creating the new texture.
        Texture2D tex = new Texture2D(_wid, _hei, TextureFormat.RGBA32, false);
        //tex.alphaIsTransparency = true;
        //tex.EncodeToPNG();
        //giving the new texture the "improper" ratio sprite texture's pixel info
        //pixel by pixel.
        for (int wid = 0; wid < sprite.texture.width; wid++)
        {
            for (int hei = 0; hei < sprite.texture.height; hei++)
            {
                tex.SetPixel(wid, hei, sprite.texture.GetPixel(wid, hei));
            }
        }
        //saving the asset. the save works, was used for both meshes as well as textures.
        Sprite n_spr = Sprite.Create(tex,
            new Rect(0, 0, tex.width, tex.height),
            new Vector2(0.5f, 0.5f), 100.0f);
        AssetSaver.CreateAsset(n_spr, sprite.name + "_dtx5");
    }
}
And here are my results:
The first one is the original sprite, and the second is what I was given.
Edit: Even if I don't save my creation, just instantiate it as a GameObject, the result is still the same ugly one.
You really don't need all this code. Texture2D has a Resize function, so just pull the Texture2D from the Sprite and call Resize on it to resize it. That's it.
Something like this:
public void Resize(Sprite sprite)
{
    Texture2D tex = sprite.texture;
    tex.Resize(100, 100, TextureFormat.RGBA32, false);
    Sprite n_spr = Sprite.Create(tex,
        new Rect(0, 0, tex.width, tex.height),
        new Vector2(0.5f, 0.5f), 100.0f);
    AssetSaver.CreateAsset(n_spr, sprite.name + "_dtx5");
}
As for your original problem, that's because you did not call the Apply function. Each time you modify the pixels, you are supposed to call Apply. Finally, always use GetPixels32, not GetPixel or GetPixels; GetPixels32 is much faster than the other functions.
public void Resize(Sprite sprite)
{
    int _hei, _wid;
    //getting the closest higher values that are a multiple of 4.
    for (_hei = sprite.texture.height; _hei % 4 != 0; _hei++) ;
    for (_wid = sprite.texture.width; _wid % 4 != 0; _wid++) ;
    //creating the new texture.
    Texture2D tex = new Texture2D(_wid, _hei, TextureFormat.RGBA32, false);
    //tex.alphaIsTransparency = true;
    //tex.EncodeToPNG();
    //giving the new texture the "improper" ratio sprite texture's pixel info,
    //copied in one call instead of pixel by pixel.
    Color32[] color = sprite.texture.GetPixels32();
    //the block overload is used so the original-sized pixel array fits into the padded texture.
    tex.SetPixels32(0, 0, sprite.texture.width, sprite.texture.height, color);
    tex.Apply();
    //saving the asset. the save works, was used for both meshes as well as textures.
    Sprite n_spr = Sprite.Create(tex,
        new Rect(0, 0, tex.width, tex.height),
        new Vector2(0.5f, 0.5f), 100.0f);
    AssetSaver.CreateAsset(n_spr, sprite.name + "_dtx5");
}
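As a side note on the rounding loops: the next multiple of 4 can also be computed with plain integer arithmetic, which does the same thing without the empty for-loops (equivalent for non-negative dimensions):
int _wid = (sprite.texture.width + 3) / 4 * 4;
int _hei = (sprite.texture.height + 3) / 4 * 4;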

How To Crop Captured Image? --C#

Is it possible to crop the captured image based on the shape that I want? I'm using a RawImage + WebCamTexture to activate the camera and save the image, and a UI Image overlay as a mask to cover the unwanted parts. I will be attaching the picture to the character model later on. Sorry, I am new to Unity. Grateful for your help!
Below is what I have in my code:
// start cam
void Start () {
    devices = WebCamTexture.devices;
    background = GetComponent<RawImage> ();
    devCam = new WebCamTexture ();
    background.texture = devCam;
    devCam.deviceName = devices [0].name;
    devCam.Play ();
}

void OnGUI()
{
    GUI.skin = skin;
    //swap front and back camera
    if (GUI.Button (new Rect ((Screen.width / 2) - 1200, Screen.height - 650, 250, 250), "", GUI.skin.GetStyle ("btn1"))) {
        devCam.Stop ();
        devCam.deviceName = (devCam.deviceName == devices [0].name) ? devices [1].name : devices [0].name;
        devCam.Play ();
    }
    //snap picture
    if (GUI.Button (new Rect ((Screen.width / 2) - 1200, Screen.height - 350, 250, 250), "", GUI.skin.GetStyle ("btn2"))) {
        OnSelectCapture ();
        //freeze cam here?
    }
}

public void OnSelectCapture()
{
    imgID++;
    string fileName = imgID.ToString () + ".png";
    Texture2D snap = new Texture2D (devCam.width, devCam.height);
    Color[] c;
    c = devCam.GetPixels ();
    snap.SetPixels (c);
    snap.Apply ();
    // Save created Texture2D (snap) into disk as .png
    System.IO.File.WriteAllBytes (Application.persistentDataPath + "/" + fileName, snap.EncodeToPNG ());
}
}
Unless I am misunderstanding your question, you can just call `devCam.Pause()`!
Update
What you're looking for is basically to copy the pixels from the screen onto a separate image under some condition. So you could use something like this: https://docs.unity3d.com/ScriptReference/Texture2D.EncodeToPNG.html
I'm not 100% sure what you want to do with it exactly, but if you want an image that you can use as a sprite, for instance, you can scan each pixel and, if the pixel's colour value is the same as the blue background, swap it for a 100% transparent pixel (0 in the alpha channel). That will give you just the face with the black hair and the ears.
Update 2
The link I referred you to copies all pixels from the camera view, so you don't have to worry about your source image. Here is the untested method; it should work plug-and-play so long as there is only one background colour, otherwise you will need to modify it slightly to test for different colours.
IEnumerator GetPNG()
{
    // Create a texture the size of the screen, RGB24 format
    yield return new WaitForEndOfFrame();
    int width = Screen.width;
    int height = Screen.height;
    Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);
    // Read screen contents into the texture
    tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    tex.Apply();
    // Create a second texture to copy the first texture into, minus the background colour. RGBA32 is needed for the alpha channel
    Texture2D CroppedTexture = new Texture2D(tex.width, tex.height, TextureFormat.RGBA32, false);
    Color BackGroundCol = Color.white; // This is your background colour/s
    // Height of image in pixels
    for (int y = 0; y < tex.height; y++) {
        // Width of image in pixels
        for (int x = 0; x < tex.width; x++) {
            Color cPixelColour = tex.GetPixel(x, y);
            if (cPixelColour != BackGroundCol) {
                CroppedTexture.SetPixel(x, y, cPixelColour);
            } else {
                CroppedTexture.SetPixel(x, y, Color.clear);
            }
        }
    }
    // Encode your cropped texture into PNG
    byte[] bytes = CroppedTexture.EncodeToPNG();
    Object.Destroy(CroppedTexture);
    Object.Destroy(tex);
    // For testing purposes, also write to a file in the project folder
    File.WriteAllBytes(Application.dataPath + "/../CroppedImage.png", bytes);
}
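Because ReadPixels has to run after the frame has finished rendering, the method above is a coroutine; it would be started from the capture button rather than called directly. A sketch, assuming the script is a MonoBehaviour like the one in the question:
//snap picture
if (GUI.Button (new Rect ((Screen.width / 2) - 1200, Screen.height - 350, 250, 250), "", GUI.skin.GetStyle ("btn2"))) {
    StartCoroutine(GetPNG());
}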

Can't Draw SharpGL(openGL wrapper) object in EMGU CV

I've been working with Emgu CV (an OpenCV wrapper) to capture images from a webcam and using its functions to process those images.
I also detect the hand and track the hand movement...
Now, I need to draw a kind of earth, or just an object, according to the hand position, and SharpGL is perfect for perspective transformations and so on. My problem is that I can't achieve that.
I don't know how to tell SharpGL "draw that object inside this hand-tracking window".
Is what I want to do impossible? I am desperate... any help would be great. Thanks in advance.
See this video if you're still confused about what I mean (http://www.youtube.com/watch?v=ccL4t36sVvg).
So far, I've just translated this code http://blog.damiles.com/2008/10/opencv-opengl/ into C#,
and here's the code snippet:
private void openGLControl_OpenGLInitialized(object sender, EventArgs e)
{
    // TODO: Initialise OpenGL here.
    // The texture identifier.
    uint[] textures = new uint[1];
    // Get the OpenGL object.
    OpenGL gl = openGLControl1.OpenGL;
    //texture.Create(gl);
    // Get one texture id, and stick it into the textures array.
    gl.GenTextures(1, textures);
    // Bind the texture.
    gl.BindTexture(OpenGL.GL_TEXTURE_2D, textures[0]);
    // A bit of extra initialisation here, we have to enable textures.
    gl.Enable(OpenGL.GL_TEXTURE_2D);
    // Specify nearest filtering.
    gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MIN_FILTER, OpenGL.GL_NEAREST);
    gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MAG_FILTER, OpenGL.GL_NEAREST);
    gl.PixelStore(OpenGL.GL_UNPACK_ALIGNMENT, 1);
    // Set the clear color.
    gl.ClearColor(1.0f, 1.0f, 1.0f, 1.0f);
}
private void openGLControl_Resized(object sender, EventArgs e)
{
    // TODO: Set the projection matrix here.
    // Get the OpenGL object.
    OpenGL gl = openGLControl1.OpenGL;
    // Set the projection matrix.
    gl.MatrixMode(OpenGL.GL_PROJECTION);
    // Load the identity.
    gl.LoadIdentity();
    // Create a perspective transformation.
    gl.Perspective(60.0f, (double)Width / (double)Height, 0.01, 100.0);
    // Use the 'look at' helper function to position and aim the camera.
    gl.LookAt(-5, 5, -5, 0, 0, 0, 0, 1, 0);
    // Set the modelview matrix.
    gl.MatrixMode(OpenGL.GL_MODELVIEW);
}
and finally the code that draws a 3D object:
private void openGLControl_OpenGLDraw(object sender, PaintEventArgs e)
{
    // Get the OpenGL object.
    OpenGL gl = openGLControl1.OpenGL;
    if (capture == null)
    {
        this.start_capture();
    }
    if (capture != null)
    {
        Image<Bgr, Byte> ImageFrame = capture.QueryFrame();
        //I'm trying to use some algorithm using the code from the sample (sharpGLTextureExample)
        //first, I make a Bitmap object from the frame returned by QueryFrame (convert it to a bitmap first)
        Bitmap image = new Bitmap(ImageFrame.ToBitmap());
        // Clear the color and depth buffer.
        gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
        //ImageFrame.Draw(new Rectangle(2, 2, 2, 2), new Bgr(Color.Aqua), 2);
        // Load the identity matrix.
        gl.LoadIdentity();
        //then, lock the image bits (so that we can pass them to OGL).
        BitmapData bitmapData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height),
            ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        gl.BindTexture(OpenGL.GL_TEXTURE_2D, textures[0]);
        //gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, (int)OpenGL.GL_RGBA, ImageFrame.Width, ImageFrame.Height, 0, OpenGL.GL_RGBA, OpenGL.GL_UNSIGNED_BYTE, ImageFrame);
        gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, (int)OpenGL.GL_RGBA, ImageFrame.Width, ImageFrame.Height, 0, OpenGL.GL_RGBA, OpenGL.GL_UNSIGNED_BYTE, bitmapData.Scan0);
        //gl.Begin(OpenGL.GL_QUADS);
        //gl.TexCoord(0, 0); gl.Vertex(-1, -1, 0);
        //gl.TexCoord(1, 0); gl.Vertex(1, -1, 0);
        //gl.TexCoord(1, 5); gl.Vertex(1, 1, 0);
        //gl.TexCoord(0, 1); gl.Vertex(-1, 1, 0);
        //gl.End();
        //gl.Flush();
        //texture.Bind(gl);
        //
        //CamImageBox.Image = ImageFrame;
    }
}
But the output always comes out white, with no texture on it...
I've also considered using the Texture class, but it's no use, because there's no method that takes the frame as an input parameter...
From your code and help from http://basic4gl.wikispaces.com/2D+Drawing+in+OpenGL, I got SharpGL displaying a video from Emgu CV:
public partial class FormSharpGLTexturesSample : Form
{
    Capture capture;

    public FormSharpGLTexturesSample()
    {
        InitializeComponent();
        // Get the OpenGL object, for quick access.
        SharpGL.OpenGL gl = this.openGLControl1.OpenGL;
        // A bit of extra initialisation here, we have to enable textures.
        gl.Enable(OpenGL.GL_TEXTURE_2D);
        gl.Disable(OpenGL.GL_DEPTH_TEST);
        // Create our capture object from a file. Its frames provide the texture for OpenGL.
        capture = new Capture(@"Video file here");
    }

    private void openGLControl1_OpenGLDraw(object sender, RenderEventArgs e)
    {
        // Get the OpenGL object, for quick access.
        SharpGL.OpenGL gl = this.openGLControl1.OpenGL;
        int Width = openGLControl1.Width;
        int Height = openGLControl1.Height;
        gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT);
        gl.LoadIdentity();
        var frame = capture.QueryFrame();
        texture.Destroy(gl);
        texture.Create(gl, frame.Bitmap);
        // Bind the texture.
        texture.Bind(gl);
        gl.Begin(OpenGL.GL_QUADS);
        gl.TexCoord(0.0f, 0.0f); gl.Vertex(0, 0, 0);
        gl.TexCoord(1.0f, 0.0f); gl.Vertex(Width, 0, 0);
        gl.TexCoord(1.0f, 1.0f); gl.Vertex(Width, Height, 0);
        gl.TexCoord(0.0f, 1.0f); gl.Vertex(0, Height, 0);
        gl.End();
        gl.Flush();
    }

    // The texture identifier.
    Texture texture = new Texture();

    private void openGLControl1_Resized(object sender, EventArgs e)
    {
        SharpGL.OpenGL gl = this.openGLControl1.OpenGL;
        // Create an orthographic projection.
        gl.MatrixMode(MatrixMode.Projection);
        gl.LoadIdentity();
        // NOTE: Basically no matter what I do, the only points I see are those at
        // the "near" surface (with z = -zNear)--in this case, I only see green points
        gl.Ortho(0, openGLControl1.Width, openGLControl1.Height, 0, 0, 1);
        // Back to the modelview.
        gl.MatrixMode(MatrixMode.Modelview);
    }
}
I hope it helps.
After some experiments, I could use TexImage2D, but only with images whose width and height are a power of two.
Instead of:
var frame = capture.QueryFrame();
texture.Destroy(gl);
texture.Create(gl, frame.Bitmap);
It can be replaced by the following block to update the data of the picture. I would like to know how to remove the need to call Resize.
var frame = capture.QueryFrame();
frame = frame.Resize(256, 256, Emgu.CV.CvEnum.INTER.CV_INTER_NN);
Bitmap Bitmap = frame.Bitmap;
BitmapData bitmapData = Bitmap.LockBits(new Rectangle(0, 0, Bitmap.Width, Bitmap.Height),
ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
gl.BindTexture(OpenGL.GL_TEXTURE_2D, textures[0]);
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, OpenGL.GL_RGBA, Bitmap.Width, Bitmap.Height, 0, OpenGL.GL_BGR, OpenGL.GL_UNSIGNED_BYTE, bitmapData.Scan0);
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MIN_FILTER, OpenGL.GL_LINEAR); // Required for TexImage2D
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MAG_FILTER, OpenGL.GL_LINEAR); // Required for TexImage2D
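If the capture size varies, the hard-coded 256 x 256 in the Resize call could instead be derived by rounding each dimension up to the next power of two. A sketch, not tested against this code; NextPowerOfTwo is a helper introduced here for illustration:
// Round a positive dimension up to the next power of two, e.g. 640 -> 1024, 480 -> 512, 256 -> 256.
static int NextPowerOfTwo(int n)
{
    int p = 1;
    while (p < n) p <<= 1;
    return p;
}
// usage, replacing the fixed 256 x 256 resize:
frame = frame.Resize(NextPowerOfTwo(frame.Width), NextPowerOfTwo(frame.Height), Emgu.CV.CvEnum.INTER.CV_INTER_NN);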
