How can I properly use WebCamTexture in Unity iOS? (C#)

I was recently using the WebCamTexture API in Unity and ran across a couple of issues.
My biggest issue is orientation. When I ran the app on my phone, it wouldn't work properly in portrait mode: the image was oriented as landscape while the phone was in portrait. Rotating the phone didn't help.
I then changed the app's default orientation to landscape and it worked, but the image was mirrored: letters and the like appeared backwards in the image. Rotating the image 180° on the y-axis didn't help, since it is a one-sided image.
Here's the code for the camera alone:
cam = new WebCamTexture();
camImage.texture = cam;
camImage.material.mainTexture = cam;
cam.Play();
camImage.transform.localScale = new Vector3(-1, -1, 1);
where camImage is a RawImage.
How would I rotate the image to work correctly in portrait as well as reflecting the image correctly? Am I using the API incorrectly?

Important:
For some years now it has really only been practical to use "NatCam" for camera work in Unity3D, on either iOS or Android; practically every app that uses the camera uses it. There is no other realistic solution until Unity actually ships working camera support.
The solution is basically this ...
You have to do this EVERY FRAME; you can't do it once "when the camera starts".
Unity messed this up: the actual values only arrive after a second or so. It's a well-known problem.
private void _orient()
{
    // Match the RawImage's aspect ratio to the camera's physical ratio.
    float physical = (float)wct.width / (float)wct.height;
    rawImageARF.aspectRatio = physical;

    // Un-mirror the feed if the device reports it as vertically mirrored.
    float scaleY = wct.videoVerticallyMirrored ? -1f : 1f;
    rawImageRT.localScale = new Vector3(1f, scaleY, 1f);

    // Counter-rotate by the device's reported rotation angle.
    int orient = -wct.videoRotationAngle;
    rawImageRT.localEulerAngles = new Vector3(0f, 0f, orient);

    showOrient.text = orient.ToString();
}
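For context, here is a minimal sketch of how this might be wired up. The field names wct, rawImageARF, rawImageRT, and showOrient come from the snippet above; everything else is an assumption to adapt to your own scene.

using UnityEngine;
using UnityEngine.UI;

// Hedged sketch: wiring for the _orient() call above.
public class CameraFeed : MonoBehaviour
{
    public RawImage rawImage;          // on-screen RawImage showing the feed
    public Text showOrient;            // optional debug readout

    private WebCamTexture wct;
    private AspectRatioFitter rawImageARF;
    private RectTransform rawImageRT;

    void Start()
    {
        wct = new WebCamTexture();
        rawImage.texture = wct;
        rawImageARF = rawImage.GetComponent<AspectRatioFitter>();
        rawImageRT = rawImage.rectTransform;
        wct.Play();
    }

    void Update()
    {
        // Every frame, not once at startup: the rotation/mirror values
        // only become correct a second or so after Play().
        _orient();
    }

    private void _orient()
    {
        // ... body exactly as above ...
    }
}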

Related

Unity2D Different Screen Resolution Problem

There is a game I made in Unity, but it does not support different screen sizes. I searched online, but all the solutions involve a Canvas, and I do not use a Canvas in my project.
My game works at a 16:9 landscape resolution. What can I do about this?
(Screenshots showed the game at 1280x720 and at 2960x1440 screen resolutions.)
You can write a script, attach it to the background, and have it change the background's dimensions based on the resolution in use.
Example:

int width = Screen.width;
int height = Screen.height;
transform.localScale = new Vector3(width, height, transform.localScale.z);

Just adapt the code, and use a Vector2 for 2D objects.
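A fuller sketch of that idea, assuming an orthographic camera and a SpriteRenderer on the background (all names here are illustrative, not a fixed API):

using UnityEngine;

// Hedged sketch: stretch a background sprite to fill an orthographic
// camera's view, whatever the screen resolution. Attach to the background.
public class BackgroundScaler : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;
        SpriteRenderer sr = GetComponent<SpriteRenderer>();

        // World-space size of the camera view.
        float worldHeight = cam.orthographicSize * 2f;
        float worldWidth = worldHeight * cam.aspect;

        // Sprite's unscaled world-space size.
        Vector2 spriteSize = sr.sprite.bounds.size;

        transform.localScale = new Vector3(
            worldWidth / spriteSize.x,
            worldHeight / spriteSize.y,
            transform.localScale.z);
    }
}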

Positioning a Gameobject to stay in screen center always

I'm trying to position a GameObject so that it always stays at the center of the screen. I'm using the following code to do so:
sphere.SetActive(true);
Vector3 lookAtPosition = FirstPersonCamera.ScreenToWorldPoint(
    new Vector3(Screen.width / 2, Screen.height / 2, FirstPersonCamera.nearClipPlane));
sphere.transform.position = lookAtPosition;
But for some reason, the gameobject is not visible at all with the above code.
So I tried raycasting to place it and make it visible.
Here is the corresponding code:
TrackableHitFlags raycastFilter = TrackableHitFlags.PlaneWithinPolygon |
                                  TrackableHitFlags.FeaturePointWithSurfaceNormal;
TrackableHit hit;
if (Frame.Raycast(screenCenter.x, screenCenter.y, raycastFilter, out hit))
{
    var pose = hit.Pose;
    sphere.SetActive(true);
    sphere.transform.position = pose.position;
    sphere.transform.up = pose.up;
}
The gameobject shows up occasionally with the above code, but it is not centered exactly on the screen and it does not stay visible. How can I sort this out?
It could be that your object is sitting on top of the camera and outside its field of view, or there is a problem with its sorting layer. A screenshot of your Inspector and Scene window might help.
The easiest way to do it is to use the following code (this is an example for the ARCore SDK in Android Studio):

static final Pose oneMeterAway = Pose.makeTranslation(0, 0, -1);
objectPose = camera.getPose().extractTranslation().compose(oneMeterAway);
And then you need to update it every frame.
Hope this helps.
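Translated to the Unity C# of the question, the same idea (re-park the object a fixed distance in front of the camera every frame) might look like this sketch. The one-meter distance is an assumption to tune, and it must exceed the camera's near clip plane:

// Hedged sketch: keep `sphere` centered by placing it a fixed distance
// in front of `FirstPersonCamera` every frame.
void Update()
{
    float distance = 1f;  // assumed; must be greater than the near clip plane
    Transform camT = FirstPersonCamera.transform;
    sphere.transform.position = camT.position + camT.forward * distance;
    sphere.transform.rotation = camT.rotation;
}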

WebCamTexture is not being rendered on a 3D plane

I am trying to access the HoloLens camera using WebCamTexture. It works fine in a standalone app: I pass the frames to a DLL for image processing, and when deployed on the HoloLens the communication between the script and the DLL also works perfectly. The problem is that I am not able to render the frames on a 3D plane.
I tried this code in the standalone app as well as on the HoloLens. Without calling the DLL it worked for me, but when passing frames to the DLL the 3D plane goes missing.
// Use this for initialization
void Start()
{
    webcamTexture = new WebCamTexture();
    Renderer renderer = GetComponent<Renderer>();
    renderer.material.mainTexture = webcamTexture;
    webcamTexture.Play();
    // Note: width/height may still be placeholder values this early,
    // before the camera has delivered its first real frame.
    data = new Color32[webcamTexture.width * webcamTexture.height];
}
Expected result: I want to display live video on 3D plane on HoloLens.
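For the frame hand-off itself, a minimal sketch of copying pixels each frame, assuming the DLL accepts a Color32[] buffer (ProcessFrame here is a hypothetical placeholder for the native entry point, not a real API):

void Update()
{
    // Wait for a real frame; width/height can report placeholder values
    // until the camera actually starts delivering images.
    if (webcamTexture.didUpdateThisFrame)
    {
        if (data == null || data.Length != webcamTexture.width * webcamTexture.height)
            data = new Color32[webcamTexture.width * webcamTexture.height];

        webcamTexture.GetPixels32(data);  // copy the current frame
        // ProcessFrame(data);            // hypothetical DLL call
    }
}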

XNA - Sprite Ghost Trail

I am currently trying to create a ghosting trail similar to the one shown in this GIF.
I have tried creating multiple instances of my Player object based on the previous frame sequentially (ghostimage = player; on the current frame, ghostimage2 = afterimage; afterimage = player; on the next frame, and so on), but to no avail.
I even tried the solution here, but localized (encompassing only the needed areas). It didn't work.
Drawing new Player objects with transparency (multiplying Color.White by some float between 0f and 1f) didn't work well either.
How do I create this effect?
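One common approach, sketched below: keep a short history of past positions and redraw the sprite at each with fading alpha. The Player fields used here (Position, Texture) are assumptions, Queue<T> comes from System.Collections.Generic, and these are fragments for your Update/Draw methods, not a complete class:

// Hedged sketch of a ghost trail in XNA 4.0.
Queue<Vector2> trail = new Queue<Vector2>();
const int TrailLength = 10;

// In Update(): record the player's position each frame.
trail.Enqueue(player.Position);
if (trail.Count > TrailLength)
    trail.Dequeue();

// In Draw(), before drawing the player itself:
int i = 0;
foreach (Vector2 pos in trail)
{
    // Oldest ghosts are faintest; Color * float scales alpha in XNA 4.0.
    float alpha = 0.5f * (i + 1) / (float)trail.Count;
    spriteBatch.Draw(player.Texture, pos, Color.White * alpha);
    i++;
}
spriteBatch.Draw(player.Texture, player.Position, Color.White);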

Unity - Camera ScreenToWorldPoint returning odd values

The main camera's output is set to a render texture, which is applied to a material, which is applied to a quad that is scaled up to 128x72. The secondary camera is set to see only what is rendered to the child quad, which has the material with the render texture on it.
However, Camera.main.ScreenToWorldPoint(Input.mousePosition) is returning values that aren't even close to the GameObject. E.g. the GameObject is instantiated at (0, 0, 0), yet hovering over it reports the mouse at (307, 174). Moving the Rotating Object to the right edge of the screen only ever returns an x position of 64 (half of the 128-wide quad), so I'm not sure where the 300+ is coming from. I'm not sure whether the quad/camera setup is responsible for this.
EDIT: Using a single orthographic camera, with all properties the same except for not using a render texture, instead of the setup I have now, results in accurate ScreenToWorldPoint output.
The Input.mousePosition property only returns the x and y axes of the mouse position, in pixels.
ScreenToWorldPoint requires a z axis too, which Input.mousePosition doesn't provide. The z value is supposed to be the distance from the camera; passing the camera's nearClipPlane gives you a position right in front of the camera.
Depending on the size of the 3D object you want to instantiate where the mouse button is pressed, you will need to apply an offset to make it fully visible on screen. For a simple cube created in Unity, an offset of 2 is fine; anything bigger than that and you will need to increase the offset.
Below is a complete example of how to properly use ScreenToWorldPoint with Camera.nearClipPlane and an offset to instantiate a 3D object where mouse is clicked:
public GameObject prefab;
public float offset = 2f;

void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Camera cam = Camera.main;
        Vector2 mousePos = Vector2.zero;
        mousePos.x = Input.mousePosition.x;
        mousePos.y = Input.mousePosition.y;

        // z = near clip plane + offset, so the object spawns fully in view.
        Vector3 worldPoint = cam.ScreenToWorldPoint(
            new Vector3(mousePos.x, mousePos.y, cam.nearClipPlane + offset));
        Instantiate(prefab, worldPoint, Quaternion.identity);
    }
}
You may not be calling the Camera.ScreenToWorldPoint method correctly. In particular, the z component of the screen position passed to this method should be defined in world units as the distance from the camera. See the Unity documentation on Camera.ScreenToWorldPoint.
Instead of Camera.main.ScreenToWorldPoint(Input.mousePosition), I think this is the correct way to call it:
var cameraPosition = Camera.main.transform.position;

// assuming `transform` is the transform of the "Virtual Screen Quad"...
float zWorldDistanceFromCamera = transform.position.z - cameraPosition.z;
var screenPoint = new Vector3(Input.mousePosition.x, Input.mousePosition.y, zWorldDistanceFromCamera);
var worldPoint = Camera.main.ScreenToWorldPoint(screenPoint);

Debug.LogFormat("mousePosition: {0} | zWorldDistanceFromCamera: {1} | worldPoint: {2}",
    Input.mousePosition,
    zWorldDistanceFromCamera,
    worldPoint.ToString("F3"));
(If this isn't working, could you update your question or reply with a comment showing the values that get logged at each step?)
I was just struggling with this problem and this question helped me find the answer, so thank you for posting it!
The issue has nothing to do with the z axis or how you're calling Camera.ScreenToWorldPoint. The issue is that the camera you're calling it on is rendering to a RenderTexture, and the dimensions of the RT don't match the dimensions of your game window. I wasn't able to find the implementation of the method in the reference source, but whatever it's doing is dependent on the resolution of the RenderTexture.
To test this, click the stats button in the game window to display the game window's screen size. The coordinates you get will match the ratio between that and the RenderTexture resolution.
Solutions:
- Don't call this method on a camera targeting a RenderTexture; either target the screen (None) or create a child camera that matches the position of the camera you need.
- Match the RT resolution to the screen. Obviously this may have performance implications, or cause issues if the screen size changes.
- Don't use Camera.ScreenToWorldPoint at all; depending on the use case, a raycast may be simpler or more reliable.
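If you must keep the RenderTexture camera, here is a hedged sketch of compensating for the resolution ratio described above. rtCam is an assumed name for the RT-targeting camera, and zWorldDistanceFromCamera is the distance value from the previous answer; this is an untested sketch based on the ratio behaviour observed above:

RenderTexture rt = rtCam.targetTexture;
Vector3 mouse = Input.mousePosition;

// Remap from game-window pixels to RenderTexture pixels before converting.
Vector3 scaled = new Vector3(
    mouse.x * rt.width / Screen.width,
    mouse.y * rt.height / Screen.height,
    zWorldDistanceFromCamera);

Vector3 worldPoint = rtCam.ScreenToWorldPoint(scaled);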
Since using a default camera was returning the correct values, I simply added another one to detect the mouse position, independent of the render texture/quad setup.
