I've been trying to take a screenshot and then immediately use it to show some sort of preview. Sometimes it works and sometimes it doesn't. I'm currently not at work and don't have Unity on this computer, so I'll try to recreate the code from memory (there might be some syntax mistakes here and there):
public GameObject screenshotPreview;

public void TakeScreenshot()
{
    string imageName = "screenshot.png";
    // Take the screenshot
    ScreenCapture.CaptureScreenshot(imageName);
    // Read the data from the file
    byte[] data = File.ReadAllBytes(Application.persistentDataPath + "/" + imageName);
    // Create the texture
    Texture2D screenshotTexture = new Texture2D(Screen.width, Screen.height);
    // Load the image
    screenshotTexture.LoadImage(data);
    // Create a sprite
    Sprite screenshotSprite = Sprite.Create(screenshotTexture, new Rect(0, 0, Screen.width, Screen.height), new Vector2(0.5f, 0.5f));
    // Set the sprite to the screenshotPreview
    screenshotPreview.GetComponent<Image>().sprite = screenshotSprite;
}
As far as I've read, ScreenCapture.CaptureScreenshot is not async, so the image should have been written right before I try to load the data. But, as I've said, sometimes it doesn't work and loads an 8x8 texture with a red question mark, which apparently means the texture failed to load. The file should have been there, so I cannot understand why it's not getting loaded properly.
Another thing I've tried (which is disgusting, but I'm getting tired of this and running out of ideas) is to wait for some time in the Update method and then execute the code to load the data, create the texture and sprite, and display it. Even so, it still fails sometimes, less frequently than before, which leads me to believe that even if the file was created, it hasn't finished being written. Does anyone know a workaround for this? Any advice is appreciated.
As extra information, this project is being run on an iOS device.
The ScreenCapture.CaptureScreenshot function is known to have many problems. Here is another one of them.
Here is a quote from its doc:
On Android this function returns immediately. The resulting screenshot
is available later.
The iOS behavior is not documented, but we can assume that it is the same on iOS. Wait a few frames after taking the screenshot before you attempt to read/load it.
public IEnumerator TakeScreenshot()
{
    string imageName = "screenshot.png";
    // Take the screenshot
    ScreenCapture.CaptureScreenshot(imageName);
    // Wait for 5 frames so the file has time to be written
    for (int i = 0; i < 5; i++)
    {
        yield return null;
    }
    // Read the data from the file
    byte[] data = File.ReadAllBytes(Application.persistentDataPath + "/" + imageName);
    // Create the texture
    Texture2D screenshotTexture = new Texture2D(Screen.width, Screen.height);
    // Load the image
    screenshotTexture.LoadImage(data);
    // Create a sprite
    Sprite screenshotSprite = Sprite.Create(screenshotTexture, new Rect(0, 0, Screen.width, Screen.height), new Vector2(0.5f, 0.5f));
    // Set the sprite to the screenshotPreview
    screenshotPreview.GetComponent<Image>().sprite = screenshotSprite;
}
Note that you must use StartCoroutine(TakeScreenshot()); to call this function.
If that did not work, don't use this function at all. Here is another way to take and save a screenshot in Unity:
private int screenshotCount = 0;

IEnumerator captureScreenshot()
{
    yield return new WaitForEndOfFrame();

    // Make sure the target folder exists (note the "/" before "Screenshots")
    string dir = Application.persistentDataPath + "/Screenshots/";
    System.IO.Directory.CreateDirectory(dir);
    string path = dir + "_" + screenshotCount + "_" + Screen.width + "X" + Screen.height + ".png";
    screenshotCount++;

    Texture2D screenImage = new Texture2D(Screen.width, Screen.height);
    //Get Image from screen
    screenImage.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
    screenImage.Apply();
    //Convert to png
    byte[] imageBytes = screenImage.EncodeToPNG();
    //Save image to file
    System.IO.File.WriteAllBytes(path, imageBytes);
}
Programmer's code worked for me when called as follows. It is designed as a coroutine so that it does not interfere with the frame rate, hence it should be called as a coroutine. Make sure the CallerObject class inherits from MonoBehaviour.
public class CallerObject : MonoBehaviour
{
    public void Caller()
    {
        string imagePath = Application.persistentDataPath + "/image.png";
        StartCoroutine(captureScreenshot(imagePath));
    }

    IEnumerator captureScreenshot(string imagePath)
    {
        yield return new WaitForEndOfFrame();
        // About to save an image capture
        Texture2D screenImage = new Texture2D(Screen.width, Screen.height);
        // Get Image from screen
        screenImage.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        screenImage.Apply();
        Debug.Log("screenImage.width=" + screenImage.width + " texelSize=" + screenImage.texelSize);
        // Convert to png
        byte[] imageBytes = screenImage.EncodeToPNG();
        Debug.Log("imageBytes=" + imageBytes.Length);
        // Save image to file
        System.IO.File.WriteAllBytes(imagePath, imageBytes);
    }
}
I see nothing in the docs that says it's not async. In fact, for Android (if I'm reading this correctly), it explicitly says it is async.
That said, I'd try stalling while the file is not found. Throw it in a coroutine and:
FileInfo yourFile = new FileInfo("YourFile.png");
while (!File.Exists(yourFile.FullName) || IsFileLocked(yourFile))
    yield return null;
IsFileLocked
You could also try throwing in some debug checks in there to see how long it takes (seconds or frames) before the file appears (assuming it ever appears).
Edit: As derHugo pointed out, the file existing doesn't mean the file is ready yet. I have edited the code to handle that! But it still doesn't cover the case where the file already existed, in which case you probably want a dynamic file name like with a timestamp, or you want to delete the file first!
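For reference, here is one common way the IsFileLocked helper linked above can be implemented (a sketch using the usual try-to-open pattern; the linked answer's version may differ in details):

private bool IsFileLocked(FileInfo file)
{
    try
    {
        // Try to open the file exclusively; this throws while another
        // process still has it open for writing
        using (FileStream stream = file.Open(FileMode.Open, FileAccess.Read, FileShare.None))
        {
            stream.Close();
        }
    }
    catch (IOException)
    {
        // Still being written, or held by another process
        return true;
    }
    return false;
}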
I am trying to capture what the phone's camera sees in an AR app and take a photo of it. What I found was how to take a screenshot of the device and then save that as an image. However, I want to capture what the camera sees instead of the screen, that is, without any 2D or 3D elements created in the application. Just purely what the camera sees. How do I do this?
public void Start()
{
    StartCoroutine("SaveImage");
}

WaitForEndOfFrame frameEnd = new WaitForEndOfFrame();

IEnumerator SaveImage()
{
    // Create a texture the size of the screen, RGB24 format
    int width = Screen.width;
    int height = Screen.height;
    yield return frameEnd;
    var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
    // Read screen contents into the texture
    tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    tex.Apply();
    byte[] bytes = tex.EncodeToPNG();
    Destroy(tex);
    var form = new WWWForm();
    form.AddField("plant", plantComponentID);
    form.AddBinaryData("image", bytes, "screenShot.png", "image/png");
    yield return null;
}
Interesting question.
What I would try is to set up a RenderTexture and tell a camera in your scene to render to it. Then I would use the ImageConversion class to write that camera-only screenshot to a file.
This is just what my attempt would be. I'm not sure it is the correct way, so I'm providing it in case it helps. I would be glad to know the outcome if you try it :)
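To make the idea concrete, here is a rough, untested sketch (targetCamera would be your AR background camera; the class and file names are just placeholders): render only that camera into a RenderTexture, read it back, and encode it.

using System.IO;
using UnityEngine;

public class CameraOnlyCapture : MonoBehaviour
{
    public Camera targetCamera; // assign the camera whose view you want

    public void Capture(string fileName)
    {
        int w = Screen.width, h = Screen.height;
        var rt = new RenderTexture(w, h, 24);

        // Render just this one camera into the RenderTexture,
        // so UI and other cameras never appear in the result
        targetCamera.targetTexture = rt;
        targetCamera.Render();

        RenderTexture.active = rt;
        var tex = new Texture2D(w, h, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, w, h), 0, 0);
        tex.Apply();

        // Clean up so the camera renders to the screen again
        targetCamera.targetTexture = null;
        RenderTexture.active = null;
        Destroy(rt);

        // EncodeToPNG is the ImageConversion extension method
        File.WriteAllBytes(Path.Combine(Application.persistentDataPath, fileName), tex.EncodeToPNG());
        Destroy(tex);
    }
}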
I am trying to create an application that generates a bitmap image every frame based on user actions and displays that image on the screen. I would like the application to also be able to update that image in Unity in real time as soon as the user makes another action.
I have created an application that does this and it works. However, it is very slow. My Update() method is attached below.
My idea was:
Capture user data (mouse location).
Convert that data into a special signal format that another program recognizes.
Have that program return a bitmap image.
Use that bitmap as a texture and update the existing texture with the new image.
Code:
UnityEngine.Texture2D oneTexture;
Bitmap currentBitmap;
private int frameCount = 0;

void Update()
{
    // Show mouse position in unity environment
    double xValue = Input.mousePosition.x;
    double yValue = Screen.height - Input.mousePosition.y;
    myPoints = "" + xValue + "," + yValue + Environment.NewLine;

    // Show heatmap being recorded.
    signals = Program.ConvertStringToSignalsList(myPoints);
    currentBitmap = Program.CreateMouseHeatmap(Screen.width, Screen.height, signals);

    // Update old heatmap texture.
    UpdateTextureFromBitmap();
    ri.texture = oneTexture;
    ri.rectTransform.sizeDelta = new Vector2(Screen.width, Screen.height);
    frameCount++;

    // Write points to Database.
    StartCoroutine(WriteToDB(xValue, yValue)); // <<<<< Comment out when playback.
}

private void UpdateTextureFromBitmap()
{
    // Convert Bitmap object into byte array instead of creating actual
    // .bmp image file each frame.
    byte[] imageBytes = ImageToBytes(currentBitmap);
    BMPLoader loader = new BMPLoader();
    BMPImage img = loader.LoadBMP(imageBytes);

    // Only initialize the Texture once.
    if (frameCount == 0)
    {
        oneTexture = img.ToTexture2D();
    }
    else
    {
        Color32[] imageData = img.imageData;
        oneTexture.SetPixels32(imageData);
        oneTexture.Apply();
    }
}
I was wondering if someone could help me improve the rate at which the image updates to the screen? I know it is possible to make this program much faster, but I am so new to Unity and C# that I don't know how to make that happen. Also, if there is a completely different way I should be going about this, I am open to that too. Any help would be appreciated. Thanks!
Also, below is a screenshot of the Profiler showing the breakdown of CPU usage. Currently it looks like every frame is taking about 500 ms.
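One direction I've been wondering about (just a sketch, I haven't verified it) is skipping the PNG/BMP round trip entirely and copying the Bitmap's pixels straight into the texture with LockBits:

// Sketch: copy a 32bpp System.Drawing.Bitmap into a Texture2D without
// encoding/decoding an image file each frame. Assumes the texture was
// created as TextureFormat.BGRA32 with the same size as the bitmap.
// Note the rows may come out vertically flipped, since GDI+ stores
// images top-down while Unity textures are bottom-up.
void CopyBitmapToTexture(System.Drawing.Bitmap bmp, Texture2D tex)
{
    var rect = new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height);
    var data = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly,
                            System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    try
    {
        byte[] raw = new byte[data.Stride * data.Height];
        System.Runtime.InteropServices.Marshal.Copy(data.Scan0, raw, 0, raw.Length);
        // GDI+ 32bppArgb is laid out B,G,R,A in memory, matching BGRA32
        tex.LoadRawTextureData(raw);
        tex.Apply();
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}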
I have made an app that takes pictures of the screen and saves them in the Gallery, named using the DateTime class and my own prefix. The code works perfectly on Android: when you press the button it takes the screenshot and finishes everything. But the story is not the same on iOS, where it crashes whenever the button is pressed. This is the code:
Flash.SetActive(true);
RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);
cam.targetTexture = rt;
Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false);
cam.Render();
RenderTexture.active = rt;
screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
cam.targetTexture = null;
RenderTexture.active = null;
Destroy(rt);
byte[] bytes = screenShot.EncodeToPNG();
filename = ScreenShotName(resWidth, resHeight);
FullSharePath = "/storage/emulated/0/GarderobaShots/" + filename + ".png";
Texture2D textu = new Texture2D(900,1320, TextureFormat.RGBAFloat, false);
textu.LoadImage(bytes);
textu.Apply();
spr = Sprite.Create(textu, new Rect(0.0f, 0.0f, textu.width, textu.height), new Vector2(1f, 1f), 100.0f);
ShareAbleObject.transform.GetChild(1).gameObject.GetComponent<Image>().sprite = spr;
ShareAbleObject.SetActive(true);
NativeGallery.SaveImageToGallery(textu, "AmazingGirlsShots", filename + ".png");
takeHiResShot = false;
Just a quick introduction to the code: it switches the scene's MainCamera with a camera I have (it's smaller and it has a different canvas with only text), then it activates an object I have called Flash, which is a white flash on the screen that goes off by itself. Afterwards it renders the texture from the screen and saves it in
Texture2D textu
FullSharePath is the path where this file will be stored so that, if the user wants, he can share it or view it from the game.
NativeGallery.SaveImageToGallery(textu, "AmazingGirlsShots", filename + ".png");
It's a plugin I took for iOS and Android that refreshes the gallery so the image shows up there; otherwise it's just somewhere on the phone. I don't think it can be the problem, as it's a plugin that many people use.
Be sure to set the Permission Privacy Settings in Info.plist.
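For example (the usage strings below are placeholders; adjust them to your app), the photo-library privacy keys look like this in Info.plist:

<key>NSPhotoLibraryUsageDescription</key>
<string>This app saves your screenshots to the photo library.</string>
<key>NSPhotoLibraryAddUsageDescription</key>
<string>This app saves your screenshots to the photo library.</string>

Without these, iOS terminates the app the moment it touches the photo library, which matches a crash on the button press.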
I've done a lot of research, but I can't find a suitable solution that works with Unity3D/C#. I'm using a Fove HMD and would like to record/make a video of the integrated camera. So far I've managed to take a snapshot of the camera every update, but I can't find a way to merge these snapshots into a video. Does someone know a way of converting them? Or can someone point me in the right direction, in which I could continue my research?
public class FoveCamera : SingletonBase<FoveCamera>
{
    private bool camAvailable;
    private WebCamTexture foveCamera;
    private List<Texture2D> snapshots = new List<Texture2D>();

    void Start()
    {
        //-------------just checking if webcam is available
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0)
        {
            Debug.LogError("FoveCamera could not be found.");
            camAvailable = false;
            return;
        }
        foreach (WebCamDevice device in devices)
        {
            if (device.name.Equals("FOVE Eyes"))
                foveCamera = new WebCamTexture(device.name); //screen.width and screen.height
        }
        if (foveCamera == null)
        {
            Debug.LogError("FoveCamera could not be found.");
            return;
        }
        //-------------camera found, start with the video
        foveCamera.Play();
        camAvailable = true;
    }

    void Update()
    {
        if (!camAvailable)
        {
            return;
        }
        //loading snap from camera
        Texture2D snap = new Texture2D(foveCamera.width, foveCamera.height);
        snap.SetPixels(foveCamera.GetPixels());
        snapshots.Add(snap);
    }
}
The code works so far. The first part of the Start method is just for finding and enabling the camera. In the Update method I'm taking a snapshot of the video every update.
After I "stop" the Update method, I would like to convert the gathered Texture2D objects into a video.
Thanks in advance
Create a MediaEncoder:
using UnityEditor;       // VideoBitrateMode
using UnityEditor.Media; // MediaEncoder

var vidAttr = new VideoTrackAttributes
{
    bitRateMode = VideoBitrateMode.Medium,
    frameRate = new MediaRational(25),
    width = 320,
    height = 240,
    includeAlpha = false
};

var audAttr = new AudioTrackAttributes
{
    sampleRate = new MediaRational(48000),
    channelCount = 2
};

var enc = new MediaEncoder("sample.mp4", vidAttr, audAttr);
Convert each snapshot to Texture2D
Then call AddFrame to add each snapshot to the MediaEncoder:
enc.AddFrame(tex);
Once done, call Dispose to close the file:
enc.Dispose();
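Put together, a minimal sketch for the snapshots list above might look like this (MediaEncoder is an Editor-only API from UnityEditor.Media; the class name, file name, and fixed 320x240/25fps settings are illustrative, and every texture must match the declared size):

using System.Collections.Generic;
using UnityEditor;        // VideoBitrateMode
using UnityEditor.Media;  // MediaEncoder
using UnityEngine;

public static class SnapshotVideoWriter
{
    // Writes the captured Texture2D snapshots to an .mp4 file.
    public static void Write(List<Texture2D> snapshots, string path)
    {
        var vidAttr = new VideoTrackAttributes
        {
            bitRateMode = VideoBitrateMode.Medium,
            frameRate = new MediaRational(25),
            width = 320,
            height = 240,
            includeAlpha = false
        };

        using (var enc = new MediaEncoder(path, vidAttr))
        {
            foreach (Texture2D tex in snapshots)
                enc.AddFrame(tex);
        } // the 'using' block calls Dispose(), which finalizes the file
    }
}

Usage would then be something like SnapshotVideoWriter.Write(snapshots, "sample.mp4"); after you stop capturing.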
I see two methods here: one is fast to implement, dirty, and not for all platforms; the second is harder but prettier. Both rely on FFmpeg.
1) Save every frame into an image file (snap.EncodeToPNG()) and then call FFmpeg to create a video from the images (FFmpeg create video from images) - slow due to the many disk operations.
2) Use FFmpeg via the wrapper implemented in AForge and supply its VideoFileWriter class with the images that you have:
Image sequence to video stream?
The problem here is that it uses System.Drawing.Bitmap, so in order to convert a Texture2D to a Bitmap you can use: How to create bitmap from byte array?
So you end up with something like:
// Open the AForge VideoFileWriter once before the frame loop
// (file name, size, and frame rate here are illustrative)
var vFWriter = new VideoFileWriter();
vFWriter.Open("output.avi", width, height, 25, VideoCodec.MPEG4);

// For each captured Texture2D snap:
Bitmap bmp;
using (var ms = new MemoryStream(snap.EncodeToPNG()))
{
    bmp = new Bitmap(ms);
}
vFWriter.WriteVideoFrame(bmp);

vFWriter.Close(); // once all frames are written
Neither method is the fastest though, so if performance is an issue here you might want to operate on lower-level data like DirectX or OpenGL textures.
When getting video input from a webcam via WebCamTexture, the bottom row of the returned image is completely black (RGB = 0,0,0).
I have tried several different webcams and get the same result with all of them.
I do get a correct image when using the Windows 10 Camera app and also when getting a webcam feed in Processing or Java.
The black line (always 1 pixel high and as wide as the image) appears when showing video on the canvas, saving a snapshot to disk and also when looking directly at the pixel data with GetPixels32().
Here is the black line at the bottom of the image:
I have confirmed that the image returned is the correct size, i.e. the black row is not an extra row. It's always the lowest line of the image that is black.
I have included the c# code I'm using below.
What is the cause of this black line and is there a way to avoid it?
I have looked for information on this issue but have not found anything online. I'm a complete beginner at Unity and would be grateful for any help.
I'm using Unity version 5.6.2, but had the same issue with 5.5.
public class CamController : MonoBehaviour
{
    private WebCamTexture webcamTexture;
    private WebCamDevice[] devices;

    void Start()
    {
        //start webcam
        webcamTexture = new WebCamTexture();
        devices = WebCamTexture.devices;
        webcamTexture.deviceName = devices[0].name;
        webcamTexture.Play();
    }

    void Update()
    {
        //if user presses C capture cam image
        if (Input.GetKeyDown(KeyCode.C))
            captureImage();
    }

    void captureImage()
    {
        //get webcam pixels
        Color32[] camPixels;
        camPixels = webcamTexture.GetPixels32();

        //print pixel data for first and second (from bottom) lines of image to console
        for (int y = 0; y < 2; y++)
        {
            Debug.Log("Line: " + y);
            for (int x = 0; x < webcamTexture.width; x++)
            {
                Debug.Log(x + " - " + camPixels[y * webcamTexture.width + x]);
            }
        }

        //save webcam image as png
        Texture2D brightBGTexture = new Texture2D(webcamTexture.width, webcamTexture.height);
        brightBGTexture.SetPixels32(camPixels, 0);
        brightBGTexture.Apply();
        byte[] pngBytes = brightBGTexture.EncodeToPNG();
        File.WriteAllBytes(Application.dataPath + "/../camImage.png", pngBytes);
    }
}
After calling SetPixels32, you must call Texture2D.Apply to apply the changes to the Texture2D.
You should do that before encoding the Texture2D to PNG:
//save webcam image as png
Texture2D brightBGTexture = new Texture2D(webcamTexture.width, webcamTexture.height);
brightBGTexture.SetPixels32(camPixels, 0);
brightBGTexture.Apply();
byte[] pngBytes = brightBGTexture.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/../camImage.png", pngBytes);
EDIT:
Even with calling Texture2D.Apply(), the problem is still there. This is a bug with the WebCamTexture API, and you should file a bug report through the Editor.
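Until it is fixed, one possible workaround (a sketch, assuming you can live with losing one row) is to crop the black row away before saving. GetPixels32 returns the bottom row first, so skipping the first width pixels drops exactly that row:

// Copy everything except row 0 (the black bottom row) into a texture
// that is one pixel shorter, then encode that instead
Color32[] camPixels = webcamTexture.GetPixels32();
int w = webcamTexture.width;
int h = webcamTexture.height;

Color32[] cropped = new Color32[w * (h - 1)];
System.Array.Copy(camPixels, w, cropped, 0, cropped.Length); // skip row 0

Texture2D croppedTexture = new Texture2D(w, h - 1);
croppedTexture.SetPixels32(cropped);
croppedTexture.Apply();
byte[] pngBytes = croppedTexture.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/../camImage.png", pngBytes);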