I am trying to access the HoloLens camera using WebCamTexture. It works fine in a standalone app: I pass the frames to a DLL for image processing, and even when deployed on HoloLens the communication between the script and the DLL works perfectly. The problem is that I am not able to render the frames on a 3D plane.
I tried this code in a standalone app as well as on HoloLens. Without calling the DLL it worked for me, but when I pass the frames to the DLL the 3D plane goes missing.
// Use this for initialization
void Start()
{
    webcamTexture = new WebCamTexture();
    Renderer renderer = GetComponent<Renderer>();
    renderer.material.mainTexture = webcamTexture;
    webcamTexture.Play();
    // Note: width/height may still report placeholder values here,
    // since the first camera frame may not have arrived yet.
    data = new Color32[webcamTexture.width * webcamTexture.height];
}
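For reference, the frame-passing loop that feeds the DLL (which the question says already works) typically looks something like the following sketch. The `ProcessFrame` entry point and the `MyImageProcessing` library name are hypothetical placeholders, not from the original post:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

public class CameraToDll : MonoBehaviour
{
    // Hypothetical native entry point; replace with your actual DLL export.
    [DllImport("MyImageProcessing")]
    private static extern void ProcessFrame(System.IntPtr pixels, int width, int height);

    // These fields are initialized in Start() as shown in the question.
    private WebCamTexture webcamTexture;
    private Color32[] data;

    void Update()
    {
        if (webcamTexture == null || !webcamTexture.didUpdateThisFrame)
            return;

        webcamTexture.GetPixels32(data);  // copy the current frame into the managed buffer
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            ProcessFrame(handle.AddrOfPinnedObject(), webcamTexture.width, webcamTexture.height);
        }
        finally
        {
            handle.Free();  // always unpin the buffer, even if the native call throws
        }
    }
}
```

Pinning the array with `GCHandle` keeps the garbage collector from moving it while the native code reads it; a long-running native call here will stall the main thread, which is one thing to check when the plane stops rendering on device.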
Expected result: I want to display the live video on a 3D plane on the HoloLens.
I have the following setup right now:
"Startup" scene which initializes all managers (spatial mapping, cursors, camera, ...)
"Main" scene with animations and one object which moves around
Once the user interacts with the program, the scene restarts at some point; then the following should happen:
Object's coordinates get stored (e.g. via DontDestroyOnLoad() on a "storage object"; I need to destroy the object itself since it has to reset on scene restart)
Scene restarts
Object gets recreated with stored coordinates
Code:
DataPositionStorer positionStorer = new GameObject("DataPositionStorer").AddComponent<DataPositionStorer>();
DontDestroyOnLoad(positionStorer.gameObject);
// dataPoints is the specific object I need to keep the coordinates of
positionStorer.Position = dataPoints.transform.position;
positionStorer.Rotation = dataPoints.transform.rotation;
positionStorer.Zoom = dataPoints.transform.localScale;
// Reload current scene ("Main")
SceneManager.LoadScene(SceneManager.GetActiveScene().name);
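For completeness, the `DataPositionStorer` component referenced above is not shown in the post; a minimal version would just be a plain MonoBehaviour holding the three values:

```csharp
using UnityEngine;

// Minimal sketch of the storage component assumed by the snippets above.
public class DataPositionStorer : MonoBehaviour
{
    public Vector3 Position { get; set; }
    public Quaternion Rotation { get; set; }
    public Vector3 Zoom { get; set; }
}
```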
And in the startup method of the object:
DataPositionStorer positionStorer = FindObjectOfType<DataPositionStorer>();
if (positionStorer != null)
{
dataPoints.transform.position = positionStorer.Position;
dataPoints.transform.rotation = positionStorer.Rotation;
dataPoints.transform.localScale = positionStorer.Zoom;
}
This setup works in the Unity player, but once I run it in the HoloLens emulator or on a HoloLens, the position afterwards is no longer correct.
Can somebody help me resolve this problem on the emulator / HoloLens?
Thanks to the two commenters, I've found a solution:
Load the WorldAnchorManager Script in the Setup scene and make it DontDestroyOnLoad
Attach a World Anchor before the scene restarts
Load the World Anchor after the scene restarted
This, however, only works on the emulator / HoloLens itself. In the Unity Editor, World Anchors don't work! So there, just copy the coordinates over as I did in my initial post.
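Sketched with the HoloToolkit `WorldAnchorManager`, the steps above look roughly like this (method names vary between toolkit versions, so treat this as an outline rather than exact API):

```csharp
using UnityEngine;

public class AnchorAcrossReload : MonoBehaviour
{
    private const string AnchorName = "dataPointsAnchor";  // hypothetical anchor id

    // Step 2 from above: before reloading the scene, anchor the object
    // so its real-world pose is persisted in the anchor store.
    public void SaveBeforeReload(GameObject dataPoints)
    {
        WorldAnchorManager.Instance.AttachAnchor(dataPoints, AnchorName);
    }

    // Step 3: after the scene has restarted, attach the same named anchor
    // to the freshly created object; if the store already contains that
    // name, the saved pose is loaded back onto the object.
    public void RestoreAfterReload(GameObject dataPoints)
    {
        WorldAnchorManager.Instance.AttachAnchor(dataPoints, AnchorName);
    }
}
```

The key design point is that anchors are looked up by name: attaching an anchor whose name already exists in the store restores the saved pose rather than creating a new one.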
I am developing a mobile application for Android and iOS with Unity to display panoramic images.
The images are downloaded and saved in the device's local storage. One image is about 5 MB to 10 MB.
I am using WWW.LoadImageIntoTexture at app start. But if I have more than seven images the app crashes. I figured out that LoadImageIntoTexture creates textures in RGBA32 format, so each texture takes about 100 MB. The crash is caused by running out of memory.
I tried Texture2D.Compress(), but it only compresses to DXT1 or DXT5, and Android cannot handle the DXT format.
My new idea is to preload the image I want to view next (only one image is visible at a time).
I tried loading the texture in a coroutine, but the app still freezes for about one second during WWW.LoadImageIntoTexture. So I tried threading, but WWW.LoadImageIntoTexture cannot be executed outside the Unity main thread.
Is there a way to execute WWW.LoadImageIntoTexture asynchronously? I tried with UniRx from the Asset Store:
public void StartPreloadPanorama(string targetid)
{
    preloadFinished = false;
    StartCoroutine(PreloadHandler());
}

IEnumerator PreloadHandler()
{
    var preload = Observable.FromCoroutine(PreloadPanorama)
        .Subscribe();
    while (!preloadFinished)
    {
        yield return null;
    }
}

IEnumerator PreloadPanorama()
{
    // texture is a public string (path to the image file)
    WWW preloadTexturer = new WWW("file://" + texture);
    Texture2D texTemp = new Texture2D(8192, 4096, TextureFormat.RGBA32, true);
    preloadTexturer.LoadImageIntoTexture(texTemp);
    preloadFinished = true;
    // edit - there is some more code after that:
    // preloadRenderer is a public Renderer attached to the gameobject displaying the panorama image
    preloadRenderer.material.mainTexture = texTemp;
    Resources.UnloadUnusedAssets();
    Debug.Log("FINISHED");
    yield return null;
}
But I still get the freeze...
Any ideas? Thanks!
Edit: according to the comments here is my setup:
I have two spheres, one active, one disabled. (GameObject.SetActive(false))
My camera is inside the spheres. The camera and the two spheres are at position (0,0,0); you move the camera with the gyroscope.
There are some buttons placed inside the spheres. If you gaze at a button for 2 seconds, the active sphere is disabled and the other sphere is activated. When the gaze starts, the StartPreloadPanorama() function is called and a new picture is attached to the disabled sphere.
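One alternative worth trying (my own suggestion, not from the original post) is `UnityWebRequestTexture`, which downloads and decodes the image through a download handler instead of `WWW.LoadImageIntoTexture` and tends to hitch the main thread less:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class PanoramaPreloader : MonoBehaviour
{
    public Renderer preloadRenderer;  // renderer of the currently inactive sphere

    // Preload a panorama from a local file path into the inactive sphere.
    public IEnumerator PreloadPanorama(string path)
    {
        using (UnityWebRequest request = UnityWebRequestTexture.GetTexture("file://" + path))
        {
            // Yielding here lets Unity spread the download/decode work
            // across frames instead of blocking inside one call.
            yield return request.SendWebRequest();

            // On Unity 2020+ check request.result instead of these flags.
            if (!request.isNetworkError && !request.isHttpError)
            {
                Texture2D tex = DownloadHandlerTexture.GetContent(request);
                preloadRenderer.material.mainTexture = tex;
            }
        }
    }
}
```

Whether this removes the one-second hitch entirely depends on the Unity version and platform; at 8192x4096 some decode cost on the main thread may remain.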
I want to change the textures of a 3D model at run time in Unity using an AR camera.
I have done this using the main camera, but with the AR camera it's not working.
I attached this script to a cube, created a button, and added the cube to the button's onClick event. It works fine with Unity's Main Camera but not with the AR Camera. Here's my script:
using UnityEngine;
using System.Collections;

public class TexturesSwap : MonoBehaviour
{
    public Texture[] textures;
    public int currentTexture;

    public void swapTexture()
    {
        currentTexture++;
        currentTexture %= textures.Length;
        GetComponent<Renderer>().material.mainTexture = textures[currentTexture];
    }
}
For swapping textures I have a folder with some texture assets, and through the script I can change the object's texture when using the Main Camera. How do I do the same thing with the AR camera? Please help me. Thanks.
You need to add a Renderer to the object. To do this, create a material in Assets (right-click the Assets folder and create a material), attach this material to the object, and then set the texture in code:
transform.GetComponent<Renderer>().material.mainTexture = yourTexture;
Note that yourTexture must be a Texture reference, not a string.
I was recently using the WebCamTexture API in Unity, but ran into a couple of issues.
My biggest issue is orientation. When I ran the app on my phone, it wouldn't work properly in portrait mode: the image was oriented as landscape while the phone was in portrait. Rotating the phone didn't help.
Then I changed the app's default orientation to landscape and that worked, but the image was mirrored: letters and such were backwards. Rotating the image 180 degrees on the y-axis didn't help since it was a one-sided image.
Here's the code for the camera alone:
cam = new WebCamTexture();
camImage.texture = cam;
camImage.material.mainTexture = cam;
cam.Play();
camImage.transform.localScale = new Vector3(-1,-1,1);
where camImage is a RawImage.
How would I rotate the image to work correctly in portrait as well as reflecting the image correctly? Am I using the API incorrectly?
Important: for some years now it has really only been practical to use "NatCam" for camera work on iOS or Android in Unity3D; virtually every app that uses the camera uses it.
There is no other realistic solution until Unity actually does the job and includes (working :) ) camera software.
The solution is basically this:
You have to do this EVERY FRAME; you can't do it once "when the camera starts".
Unity got this wrong, and the actual values only arrive after a second or so. It's a well-known problem.
private void _orient()
{
    // wct is the WebCamTexture; rawImageARF is an AspectRatioFitter on the
    // RawImage, and rawImageRT is the RawImage's RectTransform.
    float physical = (float)wct.width / (float)wct.height;
    rawImageARF.aspectRatio = physical;

    // Un-mirror the image if the platform reports it as vertically mirrored.
    float scaleY = wct.videoVerticallyMirrored ? -1f : 1f;
    rawImageRT.localScale = new Vector3(1f, scaleY, 1f);

    // Counter-rotate by the camera's reported rotation angle.
    int orient = -wct.videoRotationAngle;
    rawImageRT.localEulerAngles = new Vector3(0f, 0f, orient);

    showOrient.text = orient.ToString();
}
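Given the every-frame requirement above, the method would simply be driven from `Update()` once the texture is playing (field names as in the answer's snippet):

```csharp
void Update()
{
    // Re-orient every frame: Unity only delivers correct
    // rotation/mirror values some time after Play() is called.
    if (wct != null && wct.isPlaying)
        _orient();
}
```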
I'm new to XNA. I'm creating a platform game for Windows PC only, and I somehow need to play an AVI file.
How can I do this in XNA 4.0 / C# 2010?
Can anyone help? :)
You can try creating a VideoPlayer instance:
Video video;
VideoPlayer player;
Texture2D videoTexture;

protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);
    video = Content.Load<Video>("video");
    player = new VideoPlayer();
}
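To actually see the video you also need to start playback and draw the current frame each tick; a sketch along the lines of the linked MSDN article:

```csharp
protected override void Update(GameTime gameTime)
{
    // Start playback once the video is loaded.
    if (player.State == MediaState.Stopped)
        player.Play(video);
    base.Update(gameTime);
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Black);

    // Grab the current frame as a texture while the video is playing.
    if (player.State != MediaState.Stopped)
        videoTexture = player.GetTexture();

    // Draw the frame full-screen.
    if (videoTexture != null)
    {
        spriteBatch.Begin();
        spriteBatch.Draw(videoTexture,
            new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height),
            Color.White);
        spriteBatch.End();
    }
    base.Draw(gameTime);
}
```

`MediaState` lives in `Microsoft.Xna.Framework.Media`, the same namespace as `Video` and `VideoPlayer`.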
Here is the source article with more examples:
http://msdn.microsoft.com/en-us/library/dd904199.aspx
Here are a couple more DLLs:
http://scurvymedia.codeplex.com/
http://xnadsplayer.codeplex.com/
This blog has some working code you can download with XNA media players. The project actually puts video on a 3D object (perhaps a little overkill for what you are trying to do), but you could study the video-playback code to see how it works. There is a VideoPlayer class inside the project; maybe give that a look.