I have a script that should be able to take more than one photo. I am using PhotoCapture and I get an error that makes it unable to capture a second photo. The error "Value cannot be null" is raised on the photoCaptureObject.StartPhotoModeAsync(cameraParameters, result => line, but I don't understand why.
I have commented out the photoCaptureObject = null; line so that photoCaptureObject should not be null. The line if (photoCaptureObject == null) return; also proves that photoCaptureObject is not null.
PhotoCapture photoCaptureObject = null;
Texture2D targetTexture = null;
public string path = "";
CameraParameters cameraParameters = new CameraParameters();

private void Awake()
{
    var cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
    targetTexture = new Texture2D(cameraResolution.width, cameraResolution.height);

    // Create a PhotoCapture object
    PhotoCapture.CreateAsync(false, captureObject =>
    {
        photoCaptureObject = captureObject;
        cameraParameters.hologramOpacity = 0.0f;
        cameraParameters.cameraResolutionWidth = cameraResolution.width;
        cameraParameters.cameraResolutionHeight = cameraResolution.height;
        cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;
    });
}
private void Update()
{
    // if not initialized yet don't take input
    if (photoCaptureObject == null) return;

    if (Input.GetKeyDown(KeyCode.K) || Input.GetKeyDown("k"))
    {
        Debug.Log("k was pressed");
        VuforiaBehaviour.Instance.gameObject.SetActive(false);

        // Activate the camera
        photoCaptureObject.StartPhotoModeAsync(cameraParameters, result =>
        {
            if (result.success)
            {
                // Take a picture
                photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
            }
            else
            {
                Debug.LogError("Couldn't start photo mode!", this);
            }
        });
    }
}
There is some code in between that changes the photo taken and so on, but I don't think that code is part of the problem.
private void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
{
    // Shutdown the photo capture resource
    VuforiaBehaviour.Instance.gameObject.SetActive(true);
    photoCaptureObject.Dispose();
    //photoCaptureObject = null;
    Debug.Log("Photomode stopped");
}
So what else could be null? Is it the StartPhotoModeAsync somehow? How can I fix this issue?
Thanks!
Okay, so I understand now thanks to Henrik's comment.
Unity specifically says this about StartPhotoModeAsync:
"Only one PhotoCapture instance can start the photo mode at any given time."
I had focused more on the sentence after it, which says that one should always use PhotoCapture.StopPhotoModeAsync because having photo mode on draws more power, so I never considered that the instance wouldn't start again after being stopped.
Now I only call TakePhotoAsync in Update on the key press and never stop the photo mode, since the app I am making should always be able to capture photos.
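For anyone hitting the same thing, here is a minimal sketch of that arrangement, reusing the fields from the snippets above (error handling trimmed):

private void Awake()
{
    var cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();

    PhotoCapture.CreateAsync(false, captureObject =>
    {
        photoCaptureObject = captureObject;
        cameraParameters.hologramOpacity = 0.0f;
        cameraParameters.cameraResolutionWidth = cameraResolution.width;
        cameraParameters.cameraResolutionHeight = cameraResolution.height;
        cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;

        // Start photo mode once and keep it running for the lifetime of the app.
        photoCaptureObject.StartPhotoModeAsync(cameraParameters, result =>
        {
            if (!result.success) Debug.LogError("Couldn't start photo mode!", this);
        });
    });
}

private void Update()
{
    if (photoCaptureObject == null) return;

    if (Input.GetKeyDown(KeyCode.K))
    {
        // Photo mode is already active; just capture another frame.
        photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
    }
}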
I am trying to merge multiple clips (videos) into one using AVMutableComposition. I have successfully done this, as well as rotating and translating each instruction, but one issue remains.
When the first clip finishes, the output freezes on its last frame (the last frame of the first clip). This only happens if there is another clip visible: for example, if I set the opacity of the second and third clips to 0 at CMTime.Zero and the first one to 0 at firstClip.Duration, the result is a video that displays the first clip, and once it finishes shows a black background.
The clips' audio works perfectly.
Here is my code:
public void TESTING()
{
    //microphone
    AVCaptureDevice microphone = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Audio);

    AVMutableComposition mixComposition = AVMutableComposition.Create();
    AVVideoCompositionLayerInstruction[] Instruction_Array = new AVVideoCompositionLayerInstruction[Clips.Count];

    foreach (string clip in Clips)
    {
        var asset = AVUrlAsset.FromUrl(new NSUrl(clip, false)) as AVUrlAsset;

        #region HoldVideoTrack
        //This range applies to the video, not to the mixComposition
        CMTimeRange range = new CMTimeRange()
        {
            Start = CMTime.Zero,
            Duration = asset.Duration
        };

        var duration = mixComposition.Duration;
        NSError error;

        AVMutableCompositionTrack videoTrack = mixComposition.AddMutableTrack(AVMediaType.Video, 0);
        AVAssetTrack assetVideoTrack = asset.TracksWithMediaType(AVMediaType.Video)[0];
        videoTrack.InsertTimeRange(range, assetVideoTrack, duration, out error);
        videoTrack.PreferredTransform = assetVideoTrack.PreferredTransform;

        if (microphone != null)
        {
            AVMutableCompositionTrack audioTrack = mixComposition.AddMutableTrack(AVMediaType.Audio, 0);
            AVAssetTrack assetAudioTrack = asset.TracksWithMediaType(AVMediaType.Audio)[0];
            audioTrack.InsertTimeRange(range, assetAudioTrack, duration, out error);
        }
        #endregion

        AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];
        var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);

        #region Instructions
        int counter = Clips.IndexOf(clip);
        Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrackWithMediaType);
        #endregion
    }

    // 6
    AVMutableVideoCompositionInstruction mainInstruction = AVMutableVideoCompositionInstruction.Create() as AVMutableVideoCompositionInstruction;
    CMTimeRange rangeIns = new CMTimeRange()
    {
        Start = new CMTime(0, 0),
        Duration = mixComposition.Duration
    };
    mainInstruction.TimeRange = rangeIns;
    mainInstruction.LayerInstructions = Instruction_Array;

    var mainComposition = AVMutableVideoComposition.Create();
    mainComposition.Instructions = new AVVideoCompositionInstruction[1] { mainInstruction };
    mainComposition.FrameDuration = new CMTime(1, 30);
    mainComposition.RenderSize = new CGSize(mixComposition.NaturalSize.Height, mixComposition.NaturalSize.Width);

    finalVideo_path = NSUrl.FromFilename(Path.Combine(Path.GetTempPath(), "Whole2.mov"));
    if (File.Exists(Path.Combine(Path.GetTempPath(), "Whole2.mov")))
    {
        File.Delete(Path.Combine(Path.GetTempPath(), "Whole2.mov"));
    }

    //... export video ...
    AVAssetExportSession exportSession = new AVAssetExportSession(mixComposition, AVAssetExportSessionPreset.HighestQuality)
    {
        OutputUrl = NSUrl.FromFilename(Path.Combine(Path.GetTempPath(), "Whole2.mov")),
        OutputFileType = AVFileType.QuickTimeMovie,
        ShouldOptimizeForNetworkUse = true,
        VideoComposition = mainComposition
    };
    exportSession.ExportAsynchronously(_OnExportDone);

    FinalVideo = Path.Combine(Path.GetTempPath(), "Whole2.mov");
}
private AVMutableVideoCompositionLayerInstruction TestingInstruction(AVAsset asset, CMTime currentTime, AVAssetTrack mixComposition_video_Track)
{
    var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(mixComposition_video_Track);
    var startTime = CMTime.Subtract(currentTime, asset.Duration);

    //NaturalSize.Height is passed as the width parameter because iOS stores the video recording horizontally
    CGAffineTransform translateToCenter = CGAffineTransform.MakeTranslation(mixComposition_video_Track.NaturalSize.Height, 0);
    //Angle in radians, not degrees
    CGAffineTransform rotate = CGAffineTransform.Rotate(translateToCenter, (nfloat)(Math.PI / 2));

    instruction.SetTransform(rotate, CMTime.Subtract(currentTime, asset.Duration));
    instruction.SetOpacity(1, startTime);
    instruction.SetOpacity(0, currentTime);

    return instruction;
}
}
Does anyone know how to solve this?
If you need more information I will provide it as soon as I see your request. Thank you all for your time, have a nice day. (:
I believe I figured out the problem in your code. You are only creating instructions on the first track. Look at these two lines here:
AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];
var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);
AVMutableComposition.TracksWithMediaType gets an array of tracks, so the [0] at the end of the first line grabs only the first track in the composition, which is the first video. As you loop through, you are just creating instructions for that first video over and over.
I'm not familiar with Xamarin, so your code confuses me a bit, but I believe you can just do the following and it should work:
Change these lines:
AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];
var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);
#region Instructions
int counter = Clips.IndexOf(clip);
Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrackWithMediaType);
#endregion
To this:
var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrack);
#region Instructions
int counter = Clips.IndexOf(clip);
Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrack);
#endregion
All I did here was get rid of the videoTrackWithMediaType variable you made and use videoTrack instead. There is no need to fetch the corresponding track, since you already created it and still have access to it within the code block where you create the instructions.
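Putting it together, the tail of the foreach body would then look roughly like this (same names as in the question; I haven't been able to run this in Xamarin myself):

    videoTrack.InsertTimeRange(range, assetVideoTrack, duration, out error);
    videoTrack.PreferredTransform = assetVideoTrack.PreferredTransform;

    // ... audio handling unchanged ...

    #region Instructions
    // Build the layer instruction from the track created in this iteration,
    // instead of re-fetching track [0] of the composition every time.
    int counter = Clips.IndexOf(clip);
    Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrack);
    #endregion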
I am processing frames received from a Kinect v2 (Color and IR) in UWP. The program runs on a remote machine (Xbox One S). The main goal is to get the frames and write them to disk at 30 fps, for both Color and IR, to process them further later.
I am using the following code to check the frame rate:
public MainPage()
{
    this.InitialiseFrameReader(); // initialises MediaCapture for IR and Color
}
const int COLOR_SOURCE = 0;
const int IR_SOURCE = 1;

private async void InitialiseFrameReader()
{
    await CleanupMediaCaptureAsync();

    var allGroups = await MediaFrameSourceGroup.FindAllAsync();
    if (allGroups.Count == 0)
    {
        return;
    }

    _groupSelectionIndex = (_groupSelectionIndex + 1) % allGroups.Count;
    var selectedGroup = allGroups[_groupSelectionIndex];
    var kinectGroup = selectedGroup;

    try
    {
        await InitializeMediaCaptureAsync(kinectGroup);
    }
    catch (Exception exception)
    {
        _logger.Log($"MediaCapture initialization error: {exception.Message}");
        await CleanupMediaCaptureAsync();
        return;
    }

    // Set up frame readers, register event handlers and start streaming.
    var startedKinds = new HashSet<MediaFrameSourceKind>();
    foreach (MediaFrameSource source in _mediaCapture.FrameSources.Values.Where(x => x.Info.SourceKind == MediaFrameSourceKind.Color || x.Info.SourceKind == MediaFrameSourceKind.Infrared))
    {
        MediaFrameSourceKind kind = source.Info.SourceKind;
        MediaFrameSource frameSource = null;
        int frameindex = COLOR_SOURCE;
        if (kind == MediaFrameSourceKind.Infrared)
        {
            frameindex = IR_SOURCE;
        }

        // Ignore this source if we already have a source of this kind.
        if (startedKinds.Contains(kind))
        {
            continue;
        }

        MediaFrameSourceInfo frameInfo = kinectGroup.SourceInfos[frameindex];
        if (_mediaCapture.FrameSources.TryGetValue(frameInfo.Id, out frameSource))
        {
            // Create a frameReader based on the source stream
            MediaFrameReader frameReader = await _mediaCapture.CreateFrameReaderAsync(frameSource);
            frameReader.FrameArrived += FrameReader_FrameArrived;
            _sourceReaders.Add(frameReader);

            MediaFrameReaderStartStatus status = await frameReader.StartAsync();
            if (status == MediaFrameReaderStartStatus.Success)
            {
                startedKinds.Add(kind);
            }
        }
    }
}
private async Task InitializeMediaCaptureAsync(MediaFrameSourceGroup sourceGroup)
{
    if (_mediaCapture != null)
    {
        return;
    }

    // Initialize mediacapture with the source group.
    _mediaCapture = new MediaCapture();
    var settings = new MediaCaptureInitializationSettings
    {
        SourceGroup = sourceGroup,
        SharingMode = MediaCaptureSharingMode.SharedReadOnly,
        StreamingCaptureMode = StreamingCaptureMode.Video,
        MemoryPreference = MediaCaptureMemoryPreference.Cpu
    };
    await _mediaCapture.InitializeAsync(settings);
}
private void FrameReader_FrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    using (var frame = sender.TryAcquireLatestFrame())
    {
        if (frame != null)
        {
            //Settings.cameraframeQueue.Enqueue(null, frame.SourceKind.ToString(), frame.SystemRelativeTime.Value); //Add to Queue to process frame
            Debug.WriteLine(frame.SourceKind.ToString() + " : " + frame.SystemRelativeTime.ToString());
        }
    }
}
I am trying to debug the application to check the frame rate, so I have removed any further processing.
I am not sure whether I am calculating it incorrectly or something else is wrong.
For example, the System Relative Time from 04:37:06 to 04:37:48 gives:

IR:
Fps (occurrences)
31 (1)
30 (36)
29 (18)
28 (4)

Color:
Fps (occurrences)
30 (38)
29 (18)
28 (3)
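For reference, this is roughly how the logged SystemRelativeTime values are turned into the fps/occurrence counts above (a sketch, needs System.Linq; timestamps is a placeholder for the values collected for one source kind):

// Tally frames per whole second of SystemRelativeTime, then count
// how often each per-second total (i.e. each fps value) occurred.
var framesPerSecond = new Dictionary<long, int>();
foreach (TimeSpan t in timestamps) // timestamps for one source kind (Color or IR)
{
    long second = (long)t.TotalSeconds;
    framesPerSecond[second] = framesPerSecond.TryGetValue(second, out int n) ? n + 1 : 1;
}

var fpsOccurrences = framesPerSecond.Values
    .GroupBy(fps => fps)
    .OrderByDescending(g => g.Key);

foreach (var g in fpsOccurrences)
{
    Debug.WriteLine($"{g.Key}({g.Count()})"); // e.g. "30(36)"
}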
I want this frame rate to be constant (30 fps) and aligned, so that IR and Color have the same number of frames over that period.
These numbers are without any additional code. As soon as I add a processing queue or any other code, the fps decreases and ranges from 15 to 30.
Can anyone please help me with this?
Thank you.
UPDATE:
After some testing and working around, I have noticed that the PC produces 30 fps, but the Xbox One (the remote device) produces very low fps in debug mode. This does improve when running in release mode or when published to the MS Store, but the memory allocated for UWP apps on the console is quite low, which causes the frame rate to drop:
https://learn.microsoft.com/en-us/windows/uwp/xbox-apps/system-resource-allocation
The Xbox One has a maximum of 1 GB of memory available for apps and 5 GB for games, while on the PC the fps stays at 30 (as the memory has no such restrictions).
This is probably a noob question, but I can't figure out how to resolve it.
I'm opening/resuming my Android app from a notification (intent data).
The app opens fine from the intent.
The problem is that if I send the app to the background again after opening it from the intent and later resume it, the coroutine executes again every time I resume, because it keeps getting the data from the "old" intent.
Is there some way to clear the intent data (without closing the app)?
I'm trying something like AndroidJavaObject saveIntent = curActivity.Call<AndroidJavaObject>("setData"); to replace the intent data so the next iteration doesn't pick up the old values.
I'm also thinking of limiting the coroutine to one execution, but that is not a "clean" solution.
Can someone give me some guidance?
This is my code so far:
void OnApplicationPause(bool appPaused)
{
    if (!isOnAndroid || Application.isEditor) { return; }

    if (!appPaused)
    {
        //Returning to Application
        Debug.Log("Application Resumed");
        StartCoroutine(LoadSceneFromFCM());
    }
    else
    {
        //Leaving Application
        Debug.Log("Application Paused");
    }
}
IEnumerator LoadSceneFromFCM()
{
    AndroidJavaClass UnityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
    AndroidJavaObject curActivity = UnityPlayer.GetStatic<AndroidJavaObject>("currentActivity");
    AndroidJavaObject curIntent = curActivity.Call<AndroidJavaObject>("getIntent");
    string sceneToLoad = curIntent.Call<string>("getStringExtra", "sceneToOpen");
    //string extraInfo = curIntent.Call<string>("getStringExtra", "extraInfo"); use this to do some extra stuff

    Scene curScene = SceneManager.GetActiveScene();
    if (!string.IsNullOrEmpty(sceneToLoad) && sceneToLoad != curScene.name)
    {
        // If the current scene is different from the intended scene to load,
        // load the intended scene. This is to avoid reloading an already
        // active scene.
        Debug.Log("Loading Scene: " + sceneToLoad);
        Handheld.SetActivityIndicatorStyle(AndroidActivityIndicatorStyle.Large);
        Handheld.StartActivityIndicator();
        yield return new WaitForSeconds(0f);
        SceneManager.LoadScene(sceneToLoad);
    }
}
This is working for me now:
void Awake()
{
    DontDestroyOnLoad(gameObject);
}

void OnApplicationPause(bool appPaused)
{
    AndroidJavaClass unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
    AndroidJavaObject currentActivity = unityPlayer.GetStatic<AndroidJavaObject>("currentActivity");
    AndroidJavaObject intent = currentActivity.Call<AndroidJavaObject>("getIntent");
    Scene curScene = SceneManager.GetActiveScene();
    string sceneToLoad = intent.Call<string>("getStringExtra", "sceneToOpen");

    if (sceneToLoad != null && sceneToLoad.Trim().Length > 0 && sceneToLoad != curScene.name)
    {
        Debug.Log("Load the Video Scene");
        SceneManager.LoadScene(sceneToLoad);
        //redirectToAppropriateScreen(intent, onClickScreen);
    }

    intent.Call("removeExtra", "sceneToOpen"); //this removes the data from the intent, so the scene load can't run again after the next resume
    Debug.Log("Extra data removed");
}
IsPointerOverGameObject always returns false for touch.
I have tried all the solutions I could find.
It works perfectly in the Editor - clicks are blocked from falling through the UI - but on mobile this method always returns false.
Here is my code:
private static bool IsPointerOverGameObject()
{
    bool isPointerOverGameObject = EventSystem.current.IsPointerOverGameObject();

    for (int i = 0; i < Input.touchCount; i++)
    {
        Touch touch = Input.touches[i];
        if (touch.phase != TouchPhase.Canceled && touch.phase != TouchPhase.Ended)
        {
            if (EventSystem.current.IsPointerOverGameObject(Input.touches[i].fingerId))
            {
                isPointerOverGameObject = true;
                break;
            }
        }
    }

    return isPointerOverGameObject;
}

public void OnMouseDown()
{
    if (IsPointerOverGameObject())
    {
        return;
    }

    // code
}
This fix works for me:
private bool IsPointerOverUIObject()
{
    PointerEventData eventDataCurrentPosition = new PointerEventData(EventSystem.current);
    eventDataCurrentPosition.position = new Vector2(Input.mousePosition.x, Input.mousePosition.y);
    List<RaycastResult> results = new List<RaycastResult>();
    EventSystem.current.RaycastAll(eventDataCurrentPosition, results);
    return results.Count > 0;
}
Got it from:
http://answers.unity.com/answers/1115473/view.html
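In case it helps, a minimal usage sketch (the world-tap handling is a placeholder):

void Update()
{
    // On mobile, Input.mousePosition follows the first touch by default,
    // so this guard works for both mouse clicks and taps.
    if (Input.GetMouseButtonDown(0) && !IsPointerOverUIObject())
    {
        // handle the tap/click on world objects here
    }
}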
According to Unity's help forum here, EventSystem.current.IsPointerOverGameObject() requires the touch id to be passed as a parameter. Try passing Input.touches[i].fingerId as the parameter for it to work on mobile devices; in the editor you leave it empty.
Try this: EventSystem.current.IsPointerOverGameObject(Input.touches[i].fingerId)
Edit: My bad, I didn't see that you were already passing the touch id in your code - I saw the first line and thought it was missing.
I'm developing a WPF application to scan different documents with a scanner. The documents won't all be the same size; it can vary.
I have my code working without scanner dialogs, and I would like the user not to have to preview the image and then scan it to get the real size (which would result in two scans).
The problem is that when I try to set the page size to auto before scanning with
SetWIAProperty(item.Properties, "3097", 100);
I get a System.Runtime.InteropServices.COMException with HRESULT: 0x80210067.
I've googled this, and it seems that my scanner does not support this property.
So, is there any way of achieving this? I need the resulting scanned image to contain only the document, not the whole scanner area (which is what I'm obtaining right now).
In case I can't tell the scanner to scan only the document, I've also thought about cropping the resulting image to obtain just the document I need, but I don't know how to do that right now.
Here is my code:
DeviceManager deviceManager = new DeviceManager();
Device scanner = null;

foreach (DeviceInfo deviceInfo in deviceManager.DeviceInfos)
{
    if (deviceInfo.DeviceID == scannerId)
    {
        scanner = deviceInfo.Connect();
        break;
    }
}

if (scanner == null)
{
    throw new Exception("Scanner not found");
}

Item item = scanner.Items[1] as Item;

int dpi = 300;
SetWIAProperty(item.Properties, "6146", 1); // 1 Color
SetWIAProperty(item.Properties, "6147", dpi); // dpis
SetWIAProperty(item.Properties, "6148", dpi); // dpis

// This line throws the exception
//SetWIAProperty(item.Properties, "3097", 100); // page size 0=A4, 1=letter, 2=custom, 100=auto

try
{
    ICommonDialog wiaCommonDialog = new CommonDialog();
    ImageFile scannedImage = (ImageFile)wiaCommonDialog.ShowTransfer(item, FormatID.wiaFormatPNG, false);

    if (scannedImage != null)
    {
        ImageProcess imgProcess = new ImageProcess();
        object convertFilter = "Convert";
        string convertFilterID = imgProcess.FilterInfos.get_Item(ref convertFilter).FilterID;
        imgProcess.Filters.Add(convertFilterID, 0);
        SetWIAProperty(imgProcess.Filters[imgProcess.Filters.Count].Properties, "FormatID", FormatID.wiaFormatPNG);
        scannedImage = imgProcess.Apply(scannedImage);

        if (System.IO.File.Exists(@"D:\temp\scanwia3.png"))
            System.IO.File.Delete(@"D:\temp\scanwia3.png");
        scannedImage.SaveFile(@"D:\temp\scanwia3.png");
    }
    scannedImage = null;
}
finally
{
    item = null;
    scanner = null;
}
And the SetWIAProperty function:
private static void SetWIAProperty(IProperties properties, object propName, object propValue)
{
    Property prop = properties.get_Item(ref propName);
    prop.set_Value(ref propValue);
}
Any help would be appreciated.
Kind regards,
Jose.
The Page Size property belongs to the device, not to the item:
var WIA_IPS_PAGE_SIZE = "3097";
var WIA_PAGE_AUTO = 100;
SetWIAProperty(scanner.Properties, WIA_IPS_PAGE_SIZE, WIA_PAGE_AUTO);
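Applied to the code in the question, that means the commented-out call would be made against the scanner object instead, before ShowTransfer:

// Page size is a device-level property: set it on scanner.Properties, not item.Properties
SetWIAProperty(scanner.Properties, "3097", 100); // page size, 100 = auto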