Accessing dimensions (width/height) from a VideoPlayer source URL - C#

I am working on a Unity project that requires me to download a video file (mp4 format) and play it using the VideoPlayer component. Because the file is downloaded at runtime, I effectively have to "stream" it via a URL to its downloaded location, as opposed to loading it as a VideoClip.
To accurately size the target RenderTexture and GameObject the video will be playing to, I need the dimensions of the video file itself. Because it is not a VideoClip I cannot use:
VideoClip clip = videoPlayer.clip;
float videoWidth = clip.width;
float videoHeight = clip.height;
Because I am using a source URL as opposed to a VideoClip, videoPlayer.clip returns null.
Can I get the video dimensions directly from the file somehow?

You can retrieve this information from the Texture that the VideoPlayer constructs.
// Get the VideoPlayer
VideoPlayer videoPlayer = GetComponent<VideoPlayer>();
// Get the Texture from the VideoPlayer
Texture vidTex = videoPlayer.texture;
// Get the video width/height from that Texture
float videoWidth = vidTex.width;
float videoHeight = vidTex.height;
Make sure to only get the texture after videoPlayer.isPrepared is true. See my other answer for full code on how to play a video and display it on a RawImage component.
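For completeness, here is a minimal sketch of that flow, assuming a VideoPlayer on the same GameObject; the videoUrl field is a placeholder for the downloaded file's location:

using System.Collections;
using UnityEngine;
using UnityEngine.Video;

public class VideoSizeReader : MonoBehaviour
{
    // Placeholder for the downloaded mp4's location
    public string videoUrl;

    IEnumerator Start()
    {
        VideoPlayer videoPlayer = GetComponent<VideoPlayer>();
        videoPlayer.source = VideoSource.Url;
        videoPlayer.url = videoUrl;
        videoPlayer.Prepare();

        // Wait until the player has read the stream's metadata
        while (!videoPlayer.isPrepared)
            yield return null;

        // The texture now exists and carries the video's dimensions
        Texture vidTex = videoPlayer.texture;
        Debug.Log("Video size: " + vidTex.width + "x" + vidTex.height);
    }
}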

Related

How to set the dimension of my Video Player Unity 2D

So, I'm new to Unity and am trying to set the size of a Video Player, but this task seems so difficult.
Basically I'm loading my video from file using this:
videoPlayer.url = path;
where path is a variable that holds the file path of the video I'm using.
When my video is loaded into the player, it takes up the whole screen and I can't find a way to set the player dimensions to something like 100x100.
Found a way to set the dimensions using a second camera: I put my VideoPlayer inside that camera and set the camera's Viewport Rect the way I needed.
I think there has to be an easier way, but this worked! A sketch of the setup is below.
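A minimal sketch of that camera-based approach; the 100x100 target size comes from the question, while the field names and wiring are illustrative:

using UnityEngine;
using UnityEngine.Video;

public class SizedVideo : MonoBehaviour
{
    public Camera videoCamera;       // second camera used only for the video
    public VideoPlayer videoPlayer;
    public string path;              // path of the video, as in the question

    void Start()
    {
        // Render the video onto the second camera's near plane
        videoPlayer.renderMode = VideoRenderMode.CameraNearPlane;
        videoPlayer.targetCamera = videoCamera;

        // Shrink the camera's Viewport Rect so the video occupies
        // roughly a 100x100 pixel area (the rect is in screen fractions)
        videoCamera.rect = new Rect(0f, 0f, 100f / Screen.width, 100f / Screen.height);

        videoPlayer.url = path;
        videoPlayer.Play();
    }
}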

WebCamTexture is not being rendered on a 3D plane

I am trying to access the HoloLens camera using WebCamTexture. It works fine in a standalone app: I pass the frames to a DLL for image processing, and when deployed on the HoloLens the communication between the script and the DLL is also perfect. The problem is that I am not able to render the frames on a 3D plane.
I tried this code in the standalone app as well as on the HoloLens. Without calling the DLL it worked for me, but when passing the frames to the DLL the 3D plane is missing.
WebCamTexture webcamTexture;
Color32[] data;

// Use this for initialization
void Start ()
{
    webcamTexture = new WebCamTexture();

    // Show the camera feed on this object's material
    Renderer renderer = GetComponent<Renderer>();
    renderer.material.mainTexture = webcamTexture;
    webcamTexture.Play();

    // Buffer for the frames passed to the DLL
    data = new Color32[webcamTexture.width * webcamTexture.height];
}
Expected result: I want to display live video on a 3D plane on the HoloLens.
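One common pitfall worth ruling out (an assumption on my part, not confirmed as the cause here): WebCamTexture can report a 16x16 placeholder size until the first frame arrives, so allocating the pixel buffer inside Start() may hand the DLL a buffer of the wrong size. A minimal sketch that defers the allocation, reusing the field names from the snippet above:

using System.Collections;
using UnityEngine;

public class WebCamPlane : MonoBehaviour
{
    WebCamTexture webcamTexture;
    Color32[] data;

    IEnumerator Start()
    {
        webcamTexture = new WebCamTexture();
        GetComponent<Renderer>().material.mainTexture = webcamTexture;
        webcamTexture.Play();

        // Wait until the camera reports its real resolution
        while (webcamTexture.width <= 16)
            yield return null;

        data = new Color32[webcamTexture.width * webcamTexture.height];
    }
}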

How very odd: Unity sprite rendering works on some images but not others

Here's the error.
ArgumentException: Could not create sprite (0.000000, 0.000000, 750.000000, 346.000000) from a 500x332 texture.
UnityEngine.Sprite.Create (UnityEngine.Texture2D texture, Rect rect, Vector2 pivot) (at /Users/builduser/buildslave/unity/build/artifacts/generated/common/runtime/SpritesBindings.gen.cs:89)
dynamicScrollingList.saveImage (System.String path, System.Byte[] imageBytes, UnityEngine.GameObject thispanel) (at Assets/Scripts/Toolkit/dynamicScrollingList.cs:183)
I'm editing images in Photoshop, saving as a JPG or PNG, then uploading to WordPress; the images are then pulled via a WWW request into Unity and assigned to a texture.
It was all working fine until today, when I uploaded a different JPG for a logo. Nothing special about it. It seems that when I edit in Photoshop and then upload, the images don't work; however, if I pull a random image from Google and upload it, it pulls in and is assigned to the texture in Unity successfully.
Here's the Unity / MonoDevelop code:
byte[] imageBytes = loadImage(savePath);
//Loaded so assign it
Texture2D texture;
texture = new Texture2D(750, 346);
texture.LoadImage(imageBytes);
Image im = imagePanelItem.GetComponent<Image>();
im.sprite = Sprite.Create (texture, new Rect (0, 0, 750, 346), new Vector2 ());//set the Rect with position and dimensions as you need.
It works with some of my edited photos but not others. Do you know what I'm doing wrong in the process? Are there any issues with assigning JPGs or PNGs to textures if they aren't saved in some magical way?
Very odd. Help appreciated.
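The error itself points at the cause: Texture2D.LoadImage resizes the texture to the decoded image's real dimensions (here 500x332), so the hard-coded 750x346 rect no longer fits inside it, and any image that isn't exactly 750x346 will fail. A minimal sketch that builds the rect from the loaded texture instead, reusing the variable names from the snippet above:

byte[] imageBytes = loadImage(savePath);

// The constructor size is only a placeholder; LoadImage replaces it
// with the image's actual dimensions
Texture2D texture = new Texture2D(2, 2);
texture.LoadImage(imageBytes);

Image im = imagePanelItem.GetComponent<Image>();
// Size the sprite rect from the texture's real dimensions so it always fits
im.sprite = Sprite.Create(texture,
    new Rect(0, 0, texture.width, texture.height),
    new Vector2(0.5f, 0.5f));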

Unity Android and iOS: handling textures at runtime

I am developing a mobile application for Android and iOS with Unity to display panoramic images.
The images are downloaded and saved in the local storage of the device. One image is about 5 MB to 10 MB.
I am using WWW.LoadImageIntoTexture at the start of the app, but if I have more than seven images the app crashes. I figured out that LoadImageIntoTexture creates images in RGBA32 format, so each texture takes about 100 MB, and the crash is caused by running out of memory.
I tried Texture2D.Compress(), but it only compresses to DXT1 or DXT5, and Android cannot handle the DXT format.
My new idea is to preload the image I want to view next (only one image is visible at a time).
I tried to load the texture with a coroutine, but the app freezes for about one second while WWW.LoadImageIntoTexture runs. So I tried threading, but WWW.LoadImageIntoTexture cannot be executed outside the Unity main thread.
Is there a way to execute WWW.LoadImageIntoTexture asynchronously? I tried with UniRx from the Asset Store:
public void StartPreloadPanorama(string targetid) {
    preloadFinished = false;
    StartCoroutine(PreloadHandler());
}

IEnumerator PreloadHandler() {
    var preload = Observable.FromCoroutine(PreloadPanorama)
        .Subscribe();
    while (!preloadFinished) {
        yield return null;
    }
}

IEnumerator PreloadPanorama() {
    // texture is a public string
    WWW preloadTexturer = new WWW("file://" + texture);
    Texture2D texTemp = new Texture2D(8192, 4096, TextureFormat.RGBA32, true);
    preloadTexturer.LoadImageIntoTexture(texTemp);
    preloadFinished = true;
    // edit - there is some more code after that:
    // preloadRenderer is a public Renderer attached to the gameobject displaying the panorama image
    preloadRenderer.material.mainTexture = texTemp;
    Resources.UnloadUnusedAssets();
    Debug.Log("FINISHED");
    yield return null;
}
But I still get the freeze...
Any ideas? Thanks!

Edit: according to the comments, here is my setup:
I have two spheres, one active, one disabled (GameObject.SetActive(false)).
My camera is inside the spheres. The camera and the two spheres are at position 0,0,0; you can move the camera by gyro.
There are some buttons placed inside the spheres. If you gaze at a button for 2 seconds, the active sphere is disabled and the other sphere is activated. When the gaze starts, the StartPreloadPanorama() function is called and a new picture is attached to the disabled sphere.
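One route worth trying (my suggestion, not something from the question): replacing WWW with UnityWebRequestTexture, available in newer Unity versions, which hands back a ready Texture2D without an explicit LoadImageIntoTexture call. A minimal sketch, with the renderer reference and file path carried over from the question's setup:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class PanoramaPreloader : MonoBehaviour
{
    public Renderer preloadRenderer; // renderer of the hidden sphere

    public IEnumerator PreloadPanorama(string path)
    {
        using (UnityWebRequest req = UnityWebRequestTexture.GetTexture("file://" + path))
        {
            yield return req.SendWebRequest();

            if (string.IsNullOrEmpty(req.error))
            {
                // The texture is created by the download handler
                Texture2D texTemp = DownloadHandlerTexture.GetContent(req);
                preloadRenderer.material.mainTexture = texTemp;
            }
        }
    }
}

Whether this removes the one-second hitch depends on the Unity version, since the image still has to be decoded somewhere; if it persists, splitting the work across frames or pre-sizing the textures may be needed.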

How can I properly use WebCamTexture in Unity iOS?

I was recently using the WebCamTexture API in Unity, but ran across a couple of issues.
My biggest issue is orientation. When I ran the app on my phone, it wouldn't work properly in portrait mode: the image was oriented as landscape while the phone was portrait. Rotating the phone didn't help.
Then I changed the default orientation of the app to landscape and it worked, but the image was reflected: letters and such would be backwards in the image. Rotating the image 180 degrees on the y-axis didn't help since it was a one-sided image.
Here's the code for the camera alone:
cam = new WebCamTexture();
camImage.texture = cam;
camImage.material.mainTexture = cam;
cam.Play ();
camImage.transform.localScale = new Vector3(-1,-1,1);
where camImage is a RawImage.
How would I rotate the image so it works correctly in portrait, as well as reflect the image correctly? Am I using the API incorrectly?
Important: for some years now, it has really only been practical to use "NatCam" for camera work, on either iOS or Android, in Unity3D; every app which uses the camera uses it.
There is no other realistic solution until Unity actually does the job and includes (working :) ) camera software.
The solution is basically this...
You have to do this EVERY FRAME; you can't do it "when the camera starts".
Unity messed up and the actual values only arrive after a second or so; it's a well-known problem.
private void _orient()
{
    // Match the RawImage's aspect ratio to the camera feed
    float physical = (float)wct.width / (float)wct.height;
    rawImageARF.aspectRatio = physical;

    // Flip vertically if the platform delivers a mirrored feed
    float scaleY = wct.videoVerticallyMirrored ? -1f : 1f;
    rawImageRT.localScale = new Vector3(1f, scaleY, 1f);

    // Counter-rotate by the angle the platform reports
    int orient = -wct.videoRotationAngle;
    rawImageRT.localEulerAngles = new Vector3(0f, 0f, orient);
    showOrient.text = orient.ToString();
}
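For context, a sketch of the fields the snippet assumes and the per-frame call; the class wiring is illustrative, while the field names mirror the snippet:

using UnityEngine;
using UnityEngine.UI;

public class CamOrienter : MonoBehaviour
{
    public AspectRatioFitter rawImageARF; // fitter on the RawImage
    public RectTransform rawImageRT;      // the RawImage's transform
    public Text showOrient;               // optional debug readout

    WebCamTexture wct;

    void Start()
    {
        wct = new WebCamTexture();
        GetComponent<RawImage>().texture = wct;
        wct.Play();
    }

    // The rotation/mirroring values settle only after the camera has been
    // running for a moment, so the orientation is re-applied every frame
    void Update()
    {
        rawImageARF.aspectRatio = (float)wct.width / (float)wct.height;
        rawImageRT.localScale = new Vector3(1f, wct.videoVerticallyMirrored ? -1f : 1f, 1f);
        int orient = -wct.videoRotationAngle;
        rawImageRT.localEulerAngles = new Vector3(0f, 0f, orient);
        if (showOrient != null) showOrient.text = orient.ToString();
    }
}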
