I see a lot of programmers asking how to convert things INTO a Bitmap, but I can't find a suitable solution to the opposite problem.
I'm using AForge.NET with Unity, and I'm trying to test it out by applying my processed image to a cube.
My current code looks like this:
using UnityEngine;
using System.Collections;
using System.Drawing;
using AForge;
using AForge.Imaging;
using AForge.Imaging.Filters;

public class Test : MonoBehaviour {

    public Renderer rnd;
    public Bitmap grayImage;
    public Bitmap image;
    public UnmanagedImage final;
    public byte[] test;
    Texture tx;

    // Use this for initialization
    void Start () {
        image = AForge.Imaging.Image.FromFile("rip.jpg");
        Grayscale gs = new Grayscale(0.2125, 0.7154, 0.0721);
        grayImage = gs.Apply(image);
        final = UnmanagedImage.FromManagedImage(grayImage);
        rnd = GetComponent<Renderer>();
        rnd.enabled = true;
    }

    // Update is called once per frame
    void Update () {
        rnd.material.mainTexture = final;
    }
}
I get the following error on the line rnd.material.mainTexture = final;:
Cannot implicitly convert type 'AForge.Imaging.UnmanagedImage' to 'UnityEngine.Texture'
I'm also unclear on whether the managed-to-unmanaged conversion is needed at all.
Reading your code, the question should really be "How do I convert an UnmanagedImage to a Texture or Texture2D?", since your final variable is of type UnmanagedImage and stores the image returned by UnmanagedImage.FromManagedImage.
UnmanagedImage has a property called ImageData which returns an IntPtr. Luckily, Texture2D has at least two functions that load texture data from an IntPtr.
1. Use the static method Texture2D.CreateExternalTexture and its companion function UpdateExternalTexture.
Texture2D convertedTx;

// Don't initialize the Texture2D in the Update function; do it in the Start function
convertedTx = Texture2D.CreateExternalTexture(1024, 1024, TextureFormat.ARGB32, false, false, final.ImageData);

// Convert the UnmanagedImage to a Texture
convertedTx.UpdateExternalTexture(final.ImageData);
rnd.material.mainTexture = convertedTx;
2. Use Texture2D's LoadRawTextureData and its companion function Apply.
Texture2D convertedTx;

// Don't initialize the Texture2D in the Update function; do it in the Start function
convertedTx = new Texture2D(16, 16, TextureFormat.PVRTC_RGBA4, false);

int w = 16;
int h = 16;
int size = w * h * 4;

// Convert the UnmanagedImage to a Texture
convertedTx.LoadRawTextureData(final.ImageData, size);
convertedTx.Apply(); // Must call Apply after calling LoadRawTextureData
rnd.material.mainTexture = convertedTx;
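For an 8-bpp grayscale result like yours there is also a third option. This is only a sketch of mine, not part of either method above, and it assumes final.PixelFormat is Format8bppIndexed with no row padding (Stride == Width):

using System.Runtime.InteropServices; // for Marshal.Copy

Texture2D GrayscaleToTexture(UnmanagedImage src)
{
    // Assumption: 8-bpp grayscale whose stride equals its width (no row padding).
    // R8 stores the gray value in the red channel, so a grayscale-aware shader
    // or channel swizzle may be needed for display.
    var tex = new Texture2D(src.Width, src.Height, TextureFormat.R8, false);
    byte[] buffer = new byte[src.Stride * src.Height];
    Marshal.Copy(src.ImageData, buffer, 0, buffer.Length);
    tex.LoadRawTextureData(buffer);
    tex.Apply();
    return tex;
}

Call it once in Start and assign the result to rnd.material.mainTexture; there is no need to reassign it every frame in Update.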
I'm trying to take a smaller image Mat and copy it into a larger Mat so I can resize it while keeping the aspect ratio of the image. So, basically this:
So far, this is the code I've written:
private Mat MakeMatFrame(Texture2D image)
{
    // Texture must be of the right input size
    Mat img_mat = new Mat(image.height, image.width, CvType.CV_8UC4, new Scalar(0, 0, 0, 255));
    texture2DToMat(image, img_mat);
    return img_mat;
}

private void letterBoxImage(Texture2D image)
{
    // Get the input image as a Mat
    Mat source = MakeMatFrame(image);

    // Create the Mat that the source will be put in
    int col = source.cols();
    int row = source.rows();
    int _max = Math.Max(col, row);
    Mat resized = Mat.zeros(_max, _max, CvType.CV_8UC4);

    // Fill resized
    Mat roi = new Mat(resized, new Rect(0, 0, col, row));
    source.copyTo(roi);

    Texture2D tex2d = new Texture2D(resized.cols(), resized.rows());
    matToTexture2D(resized, tex2d);
    rawImage.texture = tex2d;
}
Everything I've looked at tells me this is the right approach to take (get a region of interest, fill it in). But instead of getting that third image with the children above the gray region, I just get a gray region.
In other words, the image isn't copying over properly. I've tried using a submat as well, but it failed miserably.
I've been looking for C# code on how to do this sort of thing with OpenCV for Unity, but I can only find C++ code, which tells me to do exactly this.
Is there some sort of "apply changes" function I'm unaware of for Mats? Am I selecting the region of interest incorrectly? Or is it something else?
Sorry for my English, but your code has a bug.
Mat roi = new Mat(resized, new Rect(0, 0, col, row));
The image is copied into roi, but that Mat is not connected to the resized Mat, so you have to do it like this:
Rect roi = new Rect(0, 0, col, row);
source.copyTo(resized.submat(roi));
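Folding that fix back into the method from the question, a minimal sketch could look like this (assuming OpenCV for Unity's Utils helpers for the Mat/Texture2D conversions):

private void letterBoxImage(Texture2D image)
{
    Mat source = MakeMatFrame(image);
    int col = source.cols();
    int row = source.rows();
    int _max = Math.Max(col, row);
    Mat resized = Mat.zeros(_max, _max, CvType.CV_8UC4);

    // submat returns a view into 'resized', so the copy lands in the big Mat
    source.copyTo(resized.submat(new Rect(0, 0, col, row)));

    Texture2D tex2d = new Texture2D(resized.cols(), resized.rows());
    Utils.matToTexture2D(resized, tex2d);
    rawImage.texture = tex2d;
}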
I'm trying to generate a QR code using ZXing.Net. At first I had the problem that .Save() wasn't working because of error CS1061, so I scrapped that idea. Then I tried to render the result of .Write() as an image in Unity, but Unity returns an error:
Cannot implicitly convert type 'UnityEngine.Color32[]' to 'UnityEngine.Sprite'
I tried using the answer from here, where they used Sprite.Create() as a solution but converted a Texture2D instead of a Color32[]. I wasn't able to confirm whether that code worked for me, since it returns the error:
The type or namespace name 'Image' could not be found
As I said, I wasn't able to find out whether the code really works or not. I don't know what caused the namespace error, since the script I'm using is attached to the Image UI.
This is the code I'm using:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using ZXing;
using ZXing.QrCode;
using System.Drawing;

public class SampleScript : MonoBehaviour
{
    public Texture2D myTexture;
    Sprite mySprite;
    Image myImage;

    void Main()
    {
        var qrWriter = new BarcodeWriter();
        qrWriter.Format = BarcodeFormat.QR_CODE;
        this.gameObject.GetComponent<SpriteRenderer>().sprite = qrWriter.Write("text");
    }

    public void FooBar()
    {
        mySprite = Sprite.Create(myTexture, new Rect(0.0f, 0.0f, myTexture.width, myTexture.height), new Vector2(0.5f, 0.5f), 100.0f);
        myImage.sprite = mySprite;
    }

    void Start()
    {
        FooBar();
        Main();
    }
}
I still haven't tested this code, since the errors must first be resolved before it can run.
The first error,
The type or namespace name 'Image' could not be found
is fixed by adding the corresponding namespace
using UnityEngine.UI;
at the top of your file.
The exception
Cannot implicitly convert type 'UnityEngine.Color32[]' to 'UnityEngine.Sprite'
can't simply be "fixed". It is exactly what the exception tells you: you can't implicitly convert between those types, and not even explicitly.
qrWriter.Write("text");
returns a Color32[].
What you can do instead is create a texture from this color information, BUT you will always have to know the pixel dimensions of the target texture.
Then you can use Texture2D.SetPixels32 like
var texture = new Texture2D(WIDTH, HEIGHT);
texture.SetPixels32(qrWriter.Write("text"));
texture.Apply();
this.gameObject.GetComponent<SpriteRenderer>().sprite = Sprite.Create(texture, new Rect(0, 0, texture.width, texture.height), Vector2.one * 0.5f, 100);
You will possibly also have to pass in EncodingOptions explicitly in order to set the desired pixel dimensions, as shown in this blog:
using ZXing.Common;

...

BarcodeWriter qrWriter = new BarcodeWriter
{
    Format = BarcodeFormat.QR_CODE,
    Options = new EncodingOptions
    {
        Height = height,
        Width = width
    }
};

Color32[] pixels = qrWriter.Write("text");
Texture2D texture = new Texture2D(width, height);
texture.SetPixels32(pixels);
texture.Apply();
There you can also find some more useful information about threading, scaling of the texture, etc.
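Putting the pieces together, a minimal sketch of the whole script could look like the following; the class name and the qrSize constant are illustrative choices, and a SpriteRenderer is assumed on the same GameObject:

using UnityEngine;
using ZXing;
using ZXing.Common;

public class QrCodeRenderer : MonoBehaviour
{
    const int qrSize = 256; // illustrative pixel size

    void Start()
    {
        var writer = new BarcodeWriter
        {
            Format = BarcodeFormat.QR_CODE,
            Options = new EncodingOptions { Width = qrSize, Height = qrSize }
        };

        // Write returns Color32[] in ZXing.Net's Unity build
        Color32[] pixels = writer.Write("text");

        var texture = new Texture2D(qrSize, qrSize);
        texture.SetPixels32(pixels);
        texture.Apply();

        GetComponent<SpriteRenderer>().sprite = Sprite.Create(
            texture, new Rect(0, 0, texture.width, texture.height), Vector2.one * 0.5f, 100);
    }
}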
I'm trying to access image pixels by position. I have been using a byte array for access, but it doesn't give me the correct pixel at position (x, y) the way Python's image[x][y] does. Is there a better way to access pixels?
I have used the OpenCV plugin in Unity and Visual Studio and cannot access them.
public Texture2D image;

Mat imageMat = new Mat(image.height, image.width, CvType.CV_8UC4);
Utils.texture2DToMat(image, imageMat); // actually converts the Texture2D to a matrix
byte[] imageData = new byte[(int)(imageMat.total() * imageMat.channels())]; // pixel data of the image
imageMat.get(0, 0, imageData); // gets the pixel data
byte pixel = imageData[(y * imageMat.cols() + x) * imageMat.channels() + r];
y and x are the pixel coordinates in the code and r is the channel, but I'm not able to access a particular (x, y) value with that code.
There is no usual way to do it, because the operation is really slow. But one trick is to make a screen texture from the Camera class.
After you make the texture, you can use texture.GetPixel(x, y).
public class Example : MonoBehaviour
{
    // Take a "screenshot" of a camera's Render Texture.
    Texture2D RTImage(Camera camera)
    {
        // The Render Texture in RenderTexture.active is the one
        // that will be read by ReadPixels.
        var currentRT = RenderTexture.active;
        RenderTexture.active = camera.targetTexture;

        // Render the camera's view.
        camera.Render();

        // Make a new texture and read the active Render Texture into it.
        Texture2D image = new Texture2D(camera.targetTexture.width, camera.targetTexture.height);
        image.ReadPixels(new Rect(0, 0, camera.targetTexture.width, camera.targetTexture.height), 0, 0);
        image.Apply();

        // Restore the original active Render Texture.
        RenderTexture.active = currentRT;
        return image;
    }
}
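Illustrative usage of the snippet above (my addition, not part of the original answer); x and y are your pixel coordinates, and the camera must have a targetTexture assigned:

// Capture the camera's view once, then sample individual pixels from it.
Texture2D shot = RTImage(someCamera);
Color pixel = shot.GetPixel(x, y); // note: (0, 0) is the bottom-left corner
float red = pixel.r;               // per-channel access instead of raw byte offsets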
I need help converting an Emgu CV VideoCapture image for Unity's LoadRawTextureData so I can display images/videos in Unity3D 2018.1.
I am able to display images, but they show strange lines and distorted image effects, or some kind of stuttering/scattering.
I read the question in this post and applied the solution, but the problem was not solved: Convert Mat to Texture2d (Stack Overflow).
I think the color space of the Emgu CV image doesn't match the Texture2D.
Code:
void Start () {
    vc = new VideoCapture(0);
    vc.FlipVertical = true; // The image I am getting from the webcam is flipped
    myMaterial = GetComponent<Renderer>().material;
    frameHeight = (int)vc.GetCaptureProperty(Emgu.CV.CvEnum.CapProp.FrameHeight);
    frameWidth = (int)vc.GetCaptureProperty(Emgu.CV.CvEnum.CapProp.FrameWidth);
    camera = new Texture2D(frameHeight, frameWidth, TextureFormat.RGB24, false, false);
}
I am getting images in this part of the code and converting the color space to match the Texture2D format:
void Update () {
    Mat secondimage = new Mat();
    Mat myimage = vc.QueryFrame();
    CvInvoke.CvtColor(myimage, secondimage, Emgu.CV.CvEnum.ColorConversion.Bgr2Rgb);
    Image<Rgb, byte> hello = secondimage.ToImage<Rgb, byte>();
    camera.LoadRawTextureData(hello.Bytes);
    camera.Apply();
    myMaterial.mainTexture = camera;
}
Results:
The result is kind of strange; take a look at the following images.
This is my hand held in front of the camera. This is the original Texture2D image:
Image1
This is the image displayed on the 3D plane in Unity3D:
Image2
As the code shows, I already convert the image from BGR to RGB.
Update:
I simply exchanged the width and height in the Texture2D constructor and, wow, it worked. Texture2D takes (width, height), whereas I had passed the values in (height, width) order, so the raw RGB24 bytes were being laid out with the wrong row length:
camera = new Texture2D(frameWidth, frameHeight, TextureFormat.RGB24, false, false);
I've done a lot of research, but I can't find a suitable solution that works with Unity3D/C#. I'm using a Fove HMD and would like to record/make a video of the integrated camera. So far I manage to take a snapshot of the camera every Update, but I can't find a way to merge these snapshots into a video. Does someone know a way of converting them? Or can someone point me in the right direction in which to continue my research?
public class FoveCamera : SingletonBase<FoveCamera>
{
    private bool camAvailable;
    private WebCamTexture foveCamera;
    private List<Texture2D> snapshots = new List<Texture2D>(); // must be initialized before Update adds to it

    void Start ()
    {
        // ------------- just checking if the webcam is available
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0)
        {
            Debug.LogError("FoveCamera could not be found.");
            camAvailable = false;
            return;
        }
        foreach (WebCamDevice device in devices)
        {
            if (device.name.Equals("FOVE Eyes"))
                foveCamera = new WebCamTexture(device.name); // screen.width and screen.height
        }
        if (foveCamera == null)
        {
            Debug.LogError("FoveCamera could not be found.");
            return;
        }

        // ------------- camera found, start with the video
        foveCamera.Play();
        camAvailable = true;
    }

    void Update ()
    {
        if (!camAvailable)
        {
            return;
        }

        // loading a snapshot from the camera
        Texture2D snap = new Texture2D(foveCamera.width, foveCamera.height);
        snap.SetPixels(foveCamera.GetPixels());
        snapshots.Add(snap);
    }
}
The code works so far. The first part of the Start method is just for finding and enabling the camera. In the Update method I take a snapshot of the video every frame.
After I "stop" the Update method, I would like to convert the gathered Texture2D objects into a video.
Thanks in advance.
Create a MediaEncoder:
using UnityEditor; // VideoBitrateMode
using UnityEditor.Media; // MediaEncoder
var vidAttr = new VideoTrackAttributes
{
    bitRateMode = VideoBitrateMode.Medium,
    frameRate = new MediaRational(25),
    width = 320,
    height = 240,
    includeAlpha = false
};

var audAttr = new AudioTrackAttributes
{
    sampleRate = new MediaRational(48000),
    channelCount = 2
};

var enc = new MediaEncoder("sample.mp4", vidAttr, audAttr);
Each of your snapshots is already a Texture2D, so call AddFrame for each one to add it to the MediaEncoder:
enc.AddFrame(tex);
Once done, call Dispose to close the file:
enc.Dispose();
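A minimal end-to-end sketch of these steps, assuming the snapshots list from the question (note that MediaEncoder lives in UnityEditor.Media, so this only works inside the Editor, and every frame must match the width and height set in vidAttr):

using (var encoder = new MediaEncoder("sample.mp4", vidAttr, audAttr))
{
    foreach (Texture2D tex in snapshots)
        encoder.AddFrame(tex); // frames must match vidAttr width x height
} // disposing the encoder finalizes and closes the file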
I see two methods here: one is fast to implement, dirty, and not for all platforms; the second one is harder but prettier. Both rely on FFmpeg.
1) Save every frame into an image file (snap.EncodeToPNG()) and then call FFmpeg to create a video from the images (FFmpeg create video from images). This is slow due to the many disk operations.
2) Use FFmpeg via the wrapper implemented in AForge and feed its VideoFileWriter class with the images that you have:
Image sequence to video stream?
The problem here is that it uses System.Drawing.Bitmap, so in order to convert a Texture2D to a Bitmap you can use: How to create bitmap from byte array?
So you end up with something like:
Bitmap bmp;
Texture2D snap;

// vFWriter is an AForge VideoFileWriter that has been opened beforehand
using (var ms = new MemoryStream(snap.EncodeToPNG()))
{
    bmp = new Bitmap(ms);
}
vFWriter.WriteVideoFrame(bmp);
Neither method is particularly fast, though, so if performance is an issue here you might want to operate on lower-level data like DirectX or OpenGL textures.
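For completeness, here is a hedged sketch of option 2, assuming AForge.Video.FFMPEG's VideoFileWriter and the Texture2D-to-Bitmap trick above (the file name, frame rate, and codec are illustrative choices):

using System.Drawing;      // Bitmap
using System.IO;           // MemoryStream
using AForge.Video.FFMPEG; // VideoFileWriter, VideoCodec

// width/height must match the snapshot dimensions; 25 fps is an arbitrary choice
using (var vFWriter = new VideoFileWriter())
{
    vFWriter.Open("capture.avi", width, height, 25, VideoCodec.MPEG4);
    foreach (Texture2D snap in snapshots)
    {
        using (var ms = new MemoryStream(snap.EncodeToPNG()))
        using (var bmp = new Bitmap(ms))
        {
            vFWriter.WriteVideoFrame(bmp);
        }
    }
    vFWriter.Close();
}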