This question already has answers here:
Can I take a photo in Unity using the device's camera?
(8 answers)
Closed 1 year ago.
I am trying to capture what the phone's camera sees in an AR app and save it as a photo. What I found was how to take a screenshot of the device and save that as an image. However, I want to capture what the camera sees instead of the screen, that is, without any 2D or 3D elements created by the application, purely the camera feed. How do I do this?
public void Start() {
    StartCoroutine(SaveImage());
}

WaitForEndOfFrame frameEnd = new WaitForEndOfFrame();

IEnumerator SaveImage() {
    // Create a texture the size of the screen, RGB24 format
    int width = Screen.width;
    int height = Screen.height;
    yield return frameEnd; // wait so the frame is fully rendered before reading
    var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
    // Read screen contents into the texture
    tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    tex.Apply();
    byte[] bytes = tex.EncodeToPNG();
    Destroy(tex);
    // Build the upload form (plantComponentID is defined elsewhere; the form
    // still needs to be posted, e.g. with UnityWebRequest.Post)
    var form = new WWWForm();
    form.AddField("plant", plantComponentID);
    form.AddBinaryData("image", bytes, "screenShot.png", "image/png");
    yield return null;
}
Interesting question.
What I would try is to set up a RenderTexture and tell a camera in your scene to render to it. Then I would use the ImageConversion class to write that camera-eye screenshot to a file.
This is just what my attempt would be; I'm not sure it is the correct way, so I'm offering it in case it helps. I'd be glad to know the outcome in case you try it :)
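A minimal sketch of that idea (untested; the camera field, sizes, and culling setup are my assumptions, not something from the question):

```csharp
using System.IO;
using UnityEngine;

public class CameraCapture : MonoBehaviour
{
    // A camera that renders only the AR video background; exclude the app's
    // UI and 3D layers via this camera's culling mask so they don't appear.
    public Camera captureCamera;

    public void CaptureToFile(string path)
    {
        // Render the camera into an offscreen RenderTexture.
        var rt = new RenderTexture(Screen.width, Screen.height, 24);
        captureCamera.targetTexture = rt;
        captureCamera.Render();

        // Copy the RenderTexture into a readable Texture2D.
        var prev = RenderTexture.active;
        RenderTexture.active = rt;
        var tex = new Texture2D(rt.width, rt.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        tex.Apply();
        RenderTexture.active = prev;

        // ImageConversion.EncodeToPNG is the same call tex.EncodeToPNG() wraps.
        File.WriteAllBytes(path, ImageConversion.EncodeToPNG(tex));

        // Clean up the temporary objects.
        captureCamera.targetTexture = null;
        Destroy(tex);
        Destroy(rt);
    }
}
```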
I'm trying to access image pixels by position. I have been using a byte array, but it does not give me the pixel at position x, y the way Python's image[x][y] does. Is there a better way to access pixels?
I have used the OpenCV plugin in Unity and Visual Studio and cannot access them correctly:
public Texture2D image;

Mat imageMat = new Mat(image.height, image.width, CvType.CV_8UC4);
Utils.texture2DToMat(image, imageMat); // converts the Texture2D to a matrix
byte[] imageData = new byte[(int)(imageMat.total() * imageMat.channels())]; // pixel data of the image
imageMat.get(0, 0, imageData); // copies the pixel data into the array
pixel = imageData[(y * imageMat.cols() + x) * imageMat.channels() + r];
y and x are the pixel coordinates and r is the channel index, but I'm not able to access a particular (x, y) value with that code.
There is no usual way to do it, because the operation is really slow. But one trick is to build a screen texture from the Camera class.
After you make the texture, you can use texture.GetPixel(x, y):
public class Example : MonoBehaviour
{
    // Take a "screenshot" of a camera's Render Texture.
    Texture2D RTImage(Camera camera)
    {
        // The Render Texture in RenderTexture.active is the one
        // that will be read by ReadPixels.
        var currentRT = RenderTexture.active;
        RenderTexture.active = camera.targetTexture;
        // Render the camera's view.
        camera.Render();
        // Make a new texture and read the active Render Texture into it.
        Texture2D image = new Texture2D(camera.targetTexture.width, camera.targetTexture.height);
        image.ReadPixels(new Rect(0, 0, camera.targetTexture.width, camera.targetTexture.height), 0, 0);
        image.Apply();
        // Restore the original active Render Texture.
        RenderTexture.active = currentRT;
        return image;
    }
}
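For the indexing question itself, a hedged usage sketch (`cam` is a placeholder for a camera that already has a targetTexture assigned):

```csharp
// Assumes `cam` has a RenderTexture assigned as its targetTexture.
Texture2D shot = RTImage(cam);
Color pixel = shot.GetPixel(10, 20);      // convenient but slow per pixel
// For bulk access, GetPixels32() into a Color32[] is much faster; the array
// is row-major, so index it as y * width + x:
Color32[] all = shot.GetPixels32();
Color32 same = all[20 * shot.width + 10];
```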
I am trying to create an application that generates a bitmap image every frame based on user actions and displays that image on the screen. I would also like the application to update that image in Unity in real time as soon as the user makes another action.
I have created an application that does this and it works. However, it is very slow. My Update() method is attached below.
My idea was:
Capture user data (mouse location).
Convert that data into a special signal format that another program recognizes.
Have that program return a bitmap image.
Use that bitmap as a texture and update the existing texture with the new image.
Code:
UnityEngine.Texture2D oneTexture;
Bitmap currentBitmap;
private int frameCount = 0;

void Update()
{
    // Show mouse position in the Unity environment
    double xValue = Input.mousePosition.x;
    double yValue = Screen.height - Input.mousePosition.y;
    myPoints = "" + xValue + "," + yValue + Environment.NewLine;
    // Show the heatmap being recorded
    signals = Program.ConvertStringToSignalsList(myPoints);
    currentBitmap = Program.CreateMouseHeatmap(Screen.width, Screen.height, signals);
    // Update the old heatmap texture
    UpdateTextureFromBitmap();
    ri.texture = oneTexture;
    ri.rectTransform.sizeDelta = new Vector2(Screen.width, Screen.height);
    frameCount++;
    // Write points to the database
    StartCoroutine(WriteToDB(xValue, yValue)); // <<<<< Comment out during playback.
}

private void UpdateTextureFromBitmap()
{
    // Convert the Bitmap object into a byte array instead of creating an
    // actual .bmp image file each frame.
    byte[] imageBytes = ImageToBytes(currentBitmap);
    BMPLoader loader = new BMPLoader();
    BMPImage img = loader.LoadBMP(imageBytes);
    // Only initialize the texture once.
    if (frameCount == 0)
    {
        oneTexture = img.ToTexture2D();
    }
    else
    {
        Color32[] imageData = img.imageData;
        oneTexture.SetPixels32(imageData);
        oneTexture.Apply();
    }
}
I was wondering if someone could help me improve the rate at which the image updates on the screen? I know it is possible to make this program much faster, but I am so new to Unity and C# that I don't know how. If there is a completely different way I should be going about this, I am open to that too. Any help would be appreciated. Thanks!
Also, below is a screenshot of the Profiler showing the breakdown of CPU usage. Currently every frame takes about 500 ms.
I've been trying to take a screenshot and then, immediately after, use it to show some sort of preview. Sometimes it works and sometimes it doesn't. I'm currently not at work and don't have Unity on this computer, so I'll try to recreate the code on the fly (there might be some syntax mistakes here and there):
public GameObject screenshotPreview;

public void TakeScreenshot() {
    string imageName = "screenshot.png";
    // Take the screenshot
    ScreenCapture.CaptureScreenshot(imageName);
    // Read the data from the file
    byte[] data = File.ReadAllBytes(Application.persistentDataPath + "/" + imageName);
    // Create the texture
    Texture2D screenshotTexture = new Texture2D(Screen.width, Screen.height);
    // Load the image
    screenshotTexture.LoadImage(data);
    // Create a sprite
    Sprite screenshotSprite = Sprite.Create(screenshotTexture, new Rect(0, 0, Screen.width, Screen.height), new Vector2(0.5f, 0.5f));
    // Set the sprite on the screenshotPreview
    screenshotPreview.GetComponent<Image>().sprite = screenshotSprite;
}
As far as I've read, ScreenCapture.CaptureScreenshot is not async, so the image should have been written right before I try to load the data. The problem is, as I said, that sometimes it doesn't work and loads an 8x8 texture with a red question mark, which apparently is the placeholder for a texture that failed to load. The file should have been there, so I cannot understand why it isn't loaded properly.
Another thing I've tried (which is ugly, but I'm getting tired of this and running out of ideas) is to wait some time in the Update method and then run the code that loads the data and creates the texture and sprite. Even so, it still fails sometimes, less frequently than before, which leads me to believe that even if the file was created, it hadn't finished being written. Does anyone know a workaround for this? Any advice is appreciated.
As extra information, this project is being run on an iOS device.
The ScreenCapture.CaptureScreenshot function is known to have many problems. Here is another one of them.
Here is a quote from its docs:
On Android this function returns immediately. The resulting screenshot is available later.
The iOS behavior is not documented, but we can assume it is the same on iOS. Wait a few frames after taking the screenshot before you attempt to read/load it:
public IEnumerator TakeScreenshot()
{
    string imageName = "screenshot.png";
    // Take the screenshot
    ScreenCapture.CaptureScreenshot(imageName);
    // Wait for five frames
    for (int i = 0; i < 5; i++)
    {
        yield return null;
    }
    // Read the data from the file
    byte[] data = File.ReadAllBytes(Application.persistentDataPath + "/" + imageName);
    // Create the texture
    Texture2D screenshotTexture = new Texture2D(Screen.width, Screen.height);
    // Load the image
    screenshotTexture.LoadImage(data);
    // Create a sprite
    Sprite screenshotSprite = Sprite.Create(screenshotTexture, new Rect(0, 0, Screen.width, Screen.height), new Vector2(0.5f, 0.5f));
    // Set the sprite on the screenshotPreview
    screenshotPreview.GetComponent<Image>().sprite = screenshotSprite;
}
Note that you must use StartCoroutine(TakeScreenshot()); to call this function.
If that does not work, don't use this function at all. Here is another way to take and save a screenshot in Unity:
IEnumerator captureScreenshot()
{
    yield return new WaitForEndOfFrame();
    // Note: the "Screenshots" folder must already exist, and screenshotCount
    // is a counter field defined elsewhere.
    string path = Application.persistentDataPath + "/Screenshots/"
        + "_" + screenshotCount + "_" + Screen.width + "X" + Screen.height + ".png";
    Texture2D screenImage = new Texture2D(Screen.width, Screen.height);
    // Get the image from the screen
    screenImage.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
    screenImage.Apply();
    // Convert to PNG
    byte[] imageBytes = screenImage.EncodeToPNG();
    // Save the image to a file
    System.IO.File.WriteAllBytes(path, imageBytes);
}
Programmer's code worked successfully when called as follows. It is designed as a coroutine so it does not interfere with the frame rate; hence it must be started as a coroutine. Make sure CallerObject inherits from MonoBehaviour.
public class CallerObject : MonoBehaviour
{
    public void Caller()
    {
        String imagePath = Application.persistentDataPath + "/image.png";
        StartCoroutine(captureScreenshot(imagePath));
    }

    IEnumerator captureScreenshot(String imagePath)
    {
        yield return new WaitForEndOfFrame();
        // About to save an image capture
        Texture2D screenImage = new Texture2D(Screen.width, Screen.height);
        // Get the image from the screen
        screenImage.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        screenImage.Apply();
        Debug.Log("screenImage.width=" + screenImage.width + " texelSize=" + screenImage.texelSize);
        // Convert to PNG
        byte[] imageBytes = screenImage.EncodeToPNG();
        Debug.Log("imageBytes=" + imageBytes.Length);
        // Save the image to a file
        System.IO.File.WriteAllBytes(imagePath, imageBytes);
    }
}
I see nothing in the docs that says it's not async. In fact, for Android (if I'm reading this correctly), it explicitly says it is async.
That said, I'd try stalling while the file is not found. Throw it in a coroutine and:
FileInfo yourFile = new FileInfo("YourFile.png");
// Wait until the file exists and is no longer locked by the writer.
while (!File.Exists(yourFile.FullName) || IsFileLocked(yourFile))
    yield return null;
IsFileLocked
You could also try throwing in some debug checks in there to see how long it takes (seconds or frames) before the file appears (assuming it ever appears).
Edit: As derHugo pointed out, the file existing doesn't mean the file is ready yet. I have edited the code to handle that! But it still doesn't cover the case where the file already existed, in which case you probably want a dynamic file name like with a timestamp, or you want to delete the file first!
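The answer links IsFileLocked rather than spelling it out; a common implementation of that helper (my sketch, not the linked answer's exact code) tries to open the file exclusively and treats an IOException as "still being written":

```csharp
using System.IO;

// Returns true while another process (here, the screenshot writer)
// still holds the file open.
static bool IsFileLocked(FileInfo file)
{
    try
    {
        using (FileStream stream = file.Open(FileMode.Open, FileAccess.Read, FileShare.None))
        {
            stream.Close();
        }
    }
    catch (IOException)
    {
        // The file is unavailable: still being written or open elsewhere.
        return true;
    }
    return false;
}
```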
I've done a lot of research, but I can't find a suitable solution that works with Unity3D/C#. I'm using a FOVE HMD and would like to record a video from the integrated camera. So far I have managed to take a snapshot of the camera every update, but I can't find a way to merge these snapshots into a video. Does someone know a way of converting them? Or can someone point me in the right direction for continuing my research?
public class FoveCamera : SingletonBase<FoveCamera>
{
    private bool camAvailable;
    private WebCamTexture foveCamera;
    // Must be initialized, otherwise Update throws a NullReferenceException.
    private List<Texture2D> snapshots = new List<Texture2D>();

    void Start()
    {
        //------------- just checking if a webcam is available
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0)
        {
            Debug.LogError("FoveCamera could not be found.");
            camAvailable = false;
            return;
        }
        foreach (WebCamDevice device in devices)
        {
            if (device.name.Equals("FOVE Eyes"))
                foveCamera = new WebCamTexture(device.name); // Screen.width and Screen.height
        }
        if (foveCamera == null)
        {
            Debug.LogError("FoveCamera could not be found.");
            return;
        }
        //------------- camera found, start the video
        foveCamera.Play();
        camAvailable = true;
    }

    void Update()
    {
        if (!camAvailable)
        {
            return;
        }
        // Grab a snapshot from the camera
        Texture2D snap = new Texture2D(foveCamera.width, foveCamera.height);
        snap.SetPixels(foveCamera.GetPixels());
        snapshots.Add(snap);
    }
}
The code works so far. The first part of the Start method just finds and enables the camera. In the Update method I take a snapshot of the video every update.
After I "stop" the Update method, I would like to convert the gathered Texture2D objects into a video.
Thanks in advance.
Create MediaEncoder
using UnityEditor; // VideoBitrateMode
using UnityEditor.Media; // MediaEncoder
var vidAttr = new VideoTrackAttributes
{
bitRateMode = VideoBitrateMode.Medium,
frameRate = new MediaRational(25),
width = 320,
height = 240,
includeAlpha = false
};
var audAttr = new AudioTrackAttributes
{
sampleRate = new MediaRational(48000),
channelCount = 2
};
var enc = new MediaEncoder("sample.mp4", vidAttr, audAttr);
Convert each snapshot to a Texture2D (the snapshots in your code already are)
Call AddFrame successively to add each snapshot to the MediaEncoder
enc.AddFrame(tex);
Once done call Dispose to close the file
enc.Dispose();
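Putting those steps together, a hedged sketch (editor-only, since MediaEncoder lives in UnityEditor.Media; the function name and the video-only constructor overload are my choices, and `snapshots` is the list from the question):

```csharp
using System.Collections.Generic;
using UnityEditor.Media; // MediaEncoder, VideoTrackAttributes
using UnityEngine;

static void EncodeSnapshots(List<Texture2D> snapshots, int width, int height)
{
    var vidAttr = new VideoTrackAttributes
    {
        frameRate = new MediaRational(25),
        width = (uint)width,
        height = (uint)height,
        includeAlpha = false
    };

    // Audio track omitted; MediaEncoder has a video-only constructor.
    using (var enc = new MediaEncoder("snapshots.mp4", vidAttr))
    {
        foreach (Texture2D tex in snapshots)
            enc.AddFrame(tex); // each texture must match the declared size
    }
    // Disposing (via using) finalizes and closes the file.
}
```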
I see two methods here: one is fast to implement, dirty, and not for all platforms; the second is harder but prettier. Both rely on FFmpeg.
1) Save every frame to an image file (snap.EncodeToPNG()) and then call FFmpeg to create a video from the images (FFmpeg create video from images). This is slow due to the many disk operations.
2) Use FFmpeg via the wrapper implemented in AForge and feed its VideoFileWriter class with your images:
Image sequence to video stream?
The problem here is that it uses System.Drawing.Bitmap, so to convert a Texture2D to a Bitmap you can use: How to create bitmap from byte array?
So you end up with something like:
// vFWriter is an AForge VideoFileWriter that has already been opened.
Bitmap bmp;
Texture2D snap;
using (var ms = new MemoryStream(snap.EncodeToPNG()))
{
    bmp = new Bitmap(ms);
}
vFWriter.WriteVideoFrame(bmp);
Neither method is the fastest, though, so if performance is an issue you might want to operate on lower-level data such as DirectX or OpenGL textures.
Currently I'm downloading an image from the web via WWW. That works perfectly for all targeted platforms except iOS.
On iOS the image appears just black.
Here is the code:
public void receiveData(WWW receivedData)
{
    image.sprite = Sprite.Create(receivedData.texture, new Rect(new Vector2(0, 0), new Vector2(50, 50)), new Vector2(0.5f, 0.5f));
    image.color = Color.white;
}
I've been trying things for some time now without any results.
For example, I tried to change the format of the texture with textureFormat, and to create a new Texture2D and change the pixels. But everything results in a black image.
Does anyone have an idea what the matter is?
Best Regards
It is likely black because the WWW has not yet finished downloading on iOS. That code should be in a coroutine, and you have to wait with yield return receivedData. If this does not solve your problem you'll have to post the rest of your code, but this is the likely cause:
public IEnumerator receiveData(WWW receivedData)
{
    // Wait for the image to finish downloading
    yield return receivedData;
    // You can now create a sprite from the image data
    image.sprite = Sprite.Create(receivedData.texture, new Rect(new Vector2(0, 0), new Vector2(50, 50)), new Vector2(0.5f, 0.5f));
    image.color = Color.white;
}
How large is the image? Is it possible you are exceeding the maximum allowed image size for the hardware?