I am trying to show AR video in Unity by connecting both a hand-tracking camera (Leap Motion, device 0 in the Windows Device Manager list) and a webcam (device 1).
The Leap Motion connects through the manufacturer's SDK and package files, so no additional code is needed to connect that camera in Unity.
The problem occurs when I connect the webcam (device 1) in Unity to show the AR video.
When the code below is applied to an object while both the Leap Motion and the webcam are connected, the Leap Motion is recognized and its video is output, but the webcam's video cannot be displayed.
If I unplug the Leap Motion from the PC so that only the webcam is connected, the webcam's video output works.
I want to output video from the webcam (device 1) on the object while both the Leap Motion and the webcam are connected to the PC.
Since I am a beginner in Unity, I would appreciate a simple modification to the code below.
Thanks for any help.
using UnityEngine;
using System.Collections;

public class WebCam : MonoBehaviour
{
    // Use this for initialization
    void Start()
    {
        WebCamTexture web = new WebCamTexture(1280, 720, 60);
        GetComponent<MeshRenderer>().material.mainTexture = web;
        web.Play();
    }

    // Update is called once per frame
    void Update()
    {
    }
}
There is a constructor of WebCamTexture that takes the parameter
deviceName: The name of the video input device to be used.
You can list all available devices via WebCamTexture.devices and pick the name you need, e.g.
var devices = WebCamTexture.devices;
var webcamTexture = new WebCamTexture(devices[1].name);
You might then also be able to filter out the device you need, e.g.
using System.Linq;
...
var deviceName = devices.Select(d => d.name).FirstOrDefault(n => !n.Contains("Leap"));
To find out what the cameras are called, so that you can filter by name, you could print them all, e.g.
Debug.Log(string.Join("\n", devices.Select(d => d.name)));
Theoretically you could even feed them into a dropdown and let the user decide which device to use before creating the WebCamTexture; then you wouldn't have to hardcode or guess the name at all ;)
Also note:
Call Application.RequestUserAuthorization before creating a WebCamTexture.
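Putting it all together, here is a minimal sketch of the Start() method above using a name filter (the "Leap" substring is an assumption; check the printed device names and adjust the filter):

using System.Linq;
using UnityEngine;

public class WebCam : MonoBehaviour
{
    void Start()
    {
        var devices = WebCamTexture.devices;

        // Assumption: the Leap Motion's device name contains "Leap";
        // verify with the Debug.Log line above and adjust the filter
        var deviceName = devices
            .Select(d => d.name)
            .FirstOrDefault(n => !n.Contains("Leap"));

        if (deviceName == null)
        {
            Debug.LogError("No camera other than the Leap Motion was found!");
            return;
        }

        var web = new WebCamTexture(deviceName, 1280, 720, 60);
        GetComponent<MeshRenderer>().material.mainTexture = web;
        web.Play();
    }
}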
Hello, I have displayed one webcam preview in UWP successfully.
But now I want to show previews from two cameras in my program, or be able to choose between the two cameras while both are connected to the computer.
For a single webcam preview I followed the documentation on using MediaCapture and it worked well.
But I don't know how to display two camera previews, or how to select one of the cameras.
Is it impossible?
Yes, it is possible :-). The MediaCapture class uses the default camera when you call the InitializeAsync method without parameters, but there is another overload that allows you to specify the device ID.
The documentation shows how to discover video capture devices:
DeviceInformationCollection devices =
await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
Now you can initialize multiple MediaCapture instances like this:
foreach (var device in devices)
{
    var mediaInitSettings =
        new MediaCaptureInitializationSettings { VideoDeviceId = device.Id };
    MediaCapture mediaCapture = new MediaCapture();
    // await the initialization so it completes before the capture is used
    await mediaCapture.InitializeAsync(mediaInitSettings);
    // do something with the media capture
}
Naturally, when you want to display multiple previews, you will need to have multiple CaptureElements, each set to the specific MediaCapture instance you want.
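For example (a minimal sketch; the control names previewOne and previewTwo and the two initialized MediaCapture instances are assumed to come from your XAML and the loop above):

// Wire each CaptureElement to its own MediaCapture and start the previews
previewOne.Source = mediaCaptureOne;
await mediaCaptureOne.StartPreviewAsync();

previewTwo.Source = mediaCaptureTwo;
await mediaCaptureTwo.StartPreviewAsync();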
However, this approach is quite simplified. To make sure concurrent capture and preview are supported, you must first query only the cameras that support video profiles using the MediaCapture.IsVideoProfileSupported method, as shown in the documentation, and then find a concurrency-enabled profile common to both cameras with MediaCapture.FindConcurrentProfiles (see the docs). Only then can you safely create the two previews and know the app will not crash.
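A rough sketch of those checks (untested; for the exact profile matching between the two cameras, follow the linked docs):

using System.Linq;
using Windows.Media.Capture;

// Keep only the cameras that support video profiles at all
var profileCameras = devices
    .Where(d => MediaCapture.IsVideoProfileSupported(d.Id))
    .ToList();

if (profileCameras.Count >= 2)
{
    // Profiles each camera supports while other cameras capture concurrently
    var firstProfile = MediaCapture.FindConcurrentProfiles(profileCameras[0].Id).FirstOrDefault();
    var secondProfile = MediaCapture.FindConcurrentProfiles(profileCameras[1].Id).FirstOrDefault();

    if (firstProfile != null && secondProfile != null)
    {
        var settings = new MediaCaptureInitializationSettings
        {
            VideoDeviceId = profileCameras[0].Id,
            VideoProfile = firstProfile
        };
        // ...initialize one MediaCapture per camera with its profile, as above
    }
}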
I am developing an app in Unity in which the user can take photos using their device camera. This works great using Unity's WebCamTexture. However, WebCamTexture has no flash support, so I have written my own code to access the device torch. The code works; however, it doesn't work while the WebCamTexture is streaming (the camera is in use, so the Java service call returns an error). Does anyone have suggestions for how to get around this issue? Is there any way to use Unity's WebCamTexture and activate the camera torch at the same time? Here is my code for activating the camera torch:
// "camera" and "Active" are fields on the containing class
AndroidJavaClass cameraClass = new AndroidJavaClass("android.hardware.Camera");

// This is an ugly hack to make Unity
// generate camera permissions
WebCamDevice[] devices = WebCamTexture.devices;

int camID = 0;
camera = cameraClass.CallStatic<AndroidJavaObject>("open", camID);

// I'm pretty sure camera will never be null at this point:
// it will either be a valid object or Camera.open would throw an exception
if (camera != null)
{
    AndroidJavaObject cameraParameters = camera.Call<AndroidJavaObject>("getParameters");
    cameraParameters.Call("setFlashMode", "torch");
    camera.Call("setParameters", cameraParameters);
    Active = true;
}
Try checking out Camera Capture Kit for Unity. It provides the functionality you want on Android, along with the source code:
https://www.assetstore.unity3d.com/en/#!/content/56673
I am working on a game and want players to be able to place their own ships and islands into the game. I want players to be able to access folders containing the files the ships use for configuration. How would I make and access these files?
I think you need to explain your question more to get a more exact answer, but I will answer as best I can.
Technically it is possible to let players modify the game's directory, but I don't suggest it. If it were music or sound, you could build a system in your game that plays songs one after another, like the menus in games such as GTA; you can find examples of that with a little searching. But you are speaking of game content that needs to be controlled. Is your game 3D or 2D? If it is 3D, should players be able to work with a 3D modeling program? Also be aware that after Unity builds your game, there are no plain files left once players install it: Unity packages everything into its own asset format. It is better to build an editor system for your game so that players can modify or choose what they want in-game, as many games do.
Here is an example from the official Unity documentation for loading an external file into the game (note that EditorUtility only works inside the Unity editor, not in a built game):
// Opens a file selection dialog for a PNG file and overwrites the
// selected texture with its contents (UnityScript, editor-only)
class EditorUtilityOpenFilePanel {
    @MenuItem("Examples/Overwrite Texture")
    static function Apply () {
        var texture : Texture2D = Selection.activeObject;
        if (texture == null) {
            EditorUtility.DisplayDialog(
                "Select Texture",
                "You must select a texture first!",
                "Ok");
            return;
        }
        var path = EditorUtility.OpenFilePanel(
            "Overwrite with png",
            "",
            "png");
        if (path.Length != 0) {
            var www = new WWW("file:///" + path);
            www.LoadImageIntoTexture(texture);
        }
    }
}
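For content that players edit in a built game, a common alternative is to read plain text/JSON configuration files from a known folder at runtime. Here is a minimal C# sketch (the "Mods" folder location and the ShipConfig fields are assumptions; adjust them to your game):

using System.IO;
using UnityEngine;

[System.Serializable]
public class ShipConfig
{
    public string shipName; // assumed fields, adjust to your game
    public float speed;
}

public class ModLoader : MonoBehaviour
{
    void Start()
    {
        // A "Mods" folder next to the built game's data folder (assumption)
        string modFolder = Path.Combine(Application.dataPath, "Mods");
        if (!Directory.Exists(modFolder))
        {
            Directory.CreateDirectory(modFolder);
            return;
        }

        // Read every JSON config the player dropped into the folder
        foreach (string file in Directory.GetFiles(modFolder, "*.json"))
        {
            var config = JsonUtility.FromJson<ShipConfig>(File.ReadAllText(file));
            Debug.Log("Loaded ship config: " + config.shipName);
        }
    }
}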
This question refers to using the Canon SDK with a video-capable DSLR camera. Does anyone know if there is a way to check whether a video is being recorded on the camera?
I know there is no way to start recording video with the SDK, but maybe there is a way to check if one is being recorded right now?
Thanks.
The latest Canon SDK contains the following paragraph:
6.4.3 Begin/End movie shooting
You can begin/end movie shooting with the following operations.
EdsUInt32 record_start = 4; // Begin movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_start), &record_start);
EdsUInt32 record_stop = 0; // End movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_stop), &record_stop);
You may read that same property's data, and if it is set to 4, then a video is being recorded.
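A rough C# sketch of that read-back via P/Invoke (the DllImport marshalling and the kEdsPropID_Record value are assumptions based on the EDSDK C headers; verify both against EDSDK.h in your SDK version):

using System;
using System.Runtime.InteropServices;

static class EdsdkRecordState
{
    // Property ID from EDSDK.h (verify against your SDK version)
    const uint kEdsPropID_Record = 0x00000510;

    // Assumed marshalling for the native EdsGetPropertyData call
    [DllImport("EDSDK.dll")]
    static extern uint EdsGetPropertyData(
        IntPtr inRef, uint inPropertyID, int inParam,
        int inPropertySize, out uint outPropertyData);

    public static bool IsRecording(IntPtr cameraRef)
    {
        uint record;
        uint err = EdsGetPropertyData(
            cameraRef, kEdsPropID_Record, 0, sizeof(uint), out record);
        // 4 = movie shooting in progress, per the SDK paragraph above
        return err == 0 && record == 4;
    }
}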
I have been using the code in http://opensebj.blogspot.com/2009/04/naudio-tutorial-5-recording-audio.html to record audio. Basically this code:
WaveIn waveInStream;
WaveFileWriter writer;
waveInStream = new WaveIn(44100,2);
writer = new WaveFileWriter(outputFilename, waveInStream.WaveFormat);
waveInStream.DataAvailable += new EventHandler<WaveInEventArgs>(waveInStream_DataAvailable);
waveInStream.StartRecording();
It works perfectly and records every sound on the system. The problem arises when I plug in a headset (not USB, just directly into the headset jack on the built-in sound card of my laptop). The effect is that any sound I can hear in the headset is not recorded.
I think it has something to do with which device I am recording from, but I can't quite figure it out.
I am trying to record a conversation, which means I would like to record the sound from the mic and the sound I hear in the headset at the same time.
Can someone point me in the right direction for this? Thanks.
// Get default capturer
waveInStream = new WaveIn(44100,2);
Now if you plug in your headset microphone/speaker, Windows will detect it, but your program is still using the old 'default' endpoint, not the newly added device. This needs to be updated.
One solution would be to poll the endpoints, see whether a new (default) device has been added to the system, and then stop and restart the recording with the new device.
waveInStream.StopRecording();
// assign the default recording device if not already done
//waveInStream.DeviceNumber = 0;
waveInStream.StartRecording();
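As a rough sketch of the polling idea (untested; WaveIn.DeviceCount and WaveIn.GetCapabilities are NAudio's enumeration APIs, while the change detection and the choice of device 0 are simplistic assumptions):

using System;
using NAudio.Wave;

// "waveInStream" is the WaveIn from the question; keep the last seen
// device count in a field and call this periodically (e.g. from a timer)
int knownDeviceCount = WaveIn.DeviceCount;

void PollForNewDevices()
{
    if (WaveIn.DeviceCount == knownDeviceCount)
        return;
    knownDeviceCount = WaveIn.DeviceCount;

    // List the capture endpoints so you can pick the headset
    for (int i = 0; i < WaveIn.DeviceCount; i++)
    {
        var caps = WaveIn.GetCapabilities(i);
        Console.WriteLine(i + ": " + caps.ProductName);
    }

    // Restart recording on the chosen device (index 0 is an assumption)
    waveInStream.StopRecording();
    waveInStream.DeviceNumber = 0;
    waveInStream.StartRecording();
}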
Let me know if I was not very clear in the explanation.