How to flip or increase VideoRotation - C#

I have a problem with SetRecordRotation in UWP. Is it possible to increase SetRecordRotation(VideoRotation.Clockwise180Degrees) to SetRecordRotation(VideoRotation.Clockwise360Degrees)? If not, is there a way not just to double the rotation, but to flip or mirror the video, like in Skype?
I am creating an app that needs to record with a preview, like in Skype. Below is my code; any suggestions are welcome.
private async Task InitializeCameraAsync()
{
    Debug.WriteLine("InitializeCameraAsync");
    if (mc == null)
    {
        // Attempt to get the back camera if one is available, but use any camera device if not
        var allVideoDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
        //StorageFile sampleFile = await localFolder.GetFileAsync("proporties.txt");
        //String timestamp = await FileIO.ReadTextAsync(sampleFile);
        var cameraDevice = localSettings.Values["camValue"].ToString();
        if (allVideoDevices == null)
        {
            Debug.WriteLine("No camera device found!");
            return;
        }

        // Create MediaCapture and its settings
        mc = new MediaCapture();
        var settings = new MediaCaptureInitializationSettings { VideoDeviceId = allVideoDevices[int.Parse(cameraDevice)].Id };
        await mc.InitializeAsync(settings);
        //CaptureElement.RenderTransform = new ScaleTransform { ScaleX = -1 };
        //_isInitialized = true;
        SetResolution();

        DisplayInformation displayInfo = DisplayInformation.GetForCurrentView();
        displayInfo.OrientationChanged += DisplayInfo_OrientationChanged;
        DisplayInfo_OrientationChanged(displayInfo, null);

        stream = new InMemoryRandomAccessStream();
        llmr = await mc.PrepareLowLagRecordToStreamAsync(MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Auto), stream);
        //mc.SetPreviewRotation(VideoRotation.Clockwise180Degrees);
        //mc.SetRecordRotation(rotationAngle);
        //CaptureElement.RenderTransform = new ScaleTransform()
        //{
        //    ScaleX = 1
        //};
        //mc.SetPreviewMirroring(_mirroringPreview);
        //SetPreviewRotationAsync();

        // I want this to be VideoRotation.Clockwise360Degrees instead of
        // Clockwise180Degrees; is there any way to increase it?
        mc.SetRecordRotation(VideoRotation.Clockwise180Degrees);
        await llmr.StartAsync();
        await llmr.StopAsync();

        CaptureElement.Source = mc;
        CaptureElement.FlowDirection = _mirroringPreview ? FlowDirection.LeftToRight : FlowDirection.RightToLeft;
        CaptureStack.Visibility = Visibility.Visible;
        //if (localSettings.Values.ContainsKey("camValue") == false)
        //{
        //    CameraErrorTextBlock.Visibility = Visibility.Visible;
        //}
        RecordProgress.Visibility = Visibility.Visible;
        CaptureGrid.Visibility = Visibility.Visible;
        CancelButton.HorizontalAlignment = HorizontalAlignment.Right;
        //CaptureElement.FlowDirection = FlowDirection.LeftToRight;

        // Prepare low lag recording
        stream = new InMemoryRandomAccessStream();
        //var encodingProperties = (CaptureElement.Tag as StreamResolution).EncodingProperties;
        var encodingProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Auto);
        // Calculate rotation angle, taking mirroring into account if necessary
        //var rotationAngle = VideoRotation.Clockwise180Degrees + VideoRotation.Clockwise180Degrees;
        //mc.SetRecordRotation(rotationAngle);
        //var rotationAngle = 360 - ConvertDeviceOrientationToDegrees(GetCameraOrientation());
        //encodingProfile.Video.Properties.Add(RotationKey, mc.SetRecordRotation(rotationAngle));
        llmr = await mc.PrepareLowLagRecordToStreamAsync(encodingProfile, stream);
        await mc.StartPreviewAsync();
    }
    else if (mc != null)
    {
        //if (localSettings.Values.ContainsKey("camValue") == true)
        //{
        CameraErrorTextBlock.Visibility = Visibility.Visible;
        //}
    }
}

Two things that I would like to call out:
Rotating anything 360 degrees is the same as rotating it 0 degrees, which means it remains unchanged. What you actually want is to flip the image horizontally, i.e. to mirror it.
Apps like Skype only mirror the user-side preview, not the stream transmitted to the other endpoint, which remains unchanged. The reason is that if the user holds up something like written text, the receiver should be able to see it the way it is; reading mirrored text is a lot harder.
So, even though I said you should mirror instead of rotating 360 degrees, in reality you shouldn't touch the video capture stream at all in order to provide the best experience.
Finally, to mirror the preview, the easiest way is to use the FlowDirection property of the CaptureElement (for C# or C++), or alternatively to apply a transform of x: -1, y: 1 to the style of the video element (for JS):
cameraPreview.style.transform = "scale(-1, 1)";
or a RenderTransform (for C# or C++). For a reference on how to mirror the preview, check out the CameraStarterKit sample on GitHub, which covers C#, C++, JS and VB.
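As a minimal C# sketch of the FlowDirection approach (PreviewControl is an illustrative CaptureElement name, not from the question):

// Mirror only what the user sees; the recorded/transmitted stream stays untouched.
PreviewControl.FlowDirection = FlowDirection.RightToLeft;

// Alternatively, flip the element horizontally with a RenderTransform:
PreviewControl.RenderTransformOrigin = new Point(0.5, 0.5); // flip around the center
PreviewControl.RenderTransform = new ScaleTransform { ScaleX = -1 };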

Related

AVMutableComposition output freezes at the last frame of the first video

I am trying to merge multiple clips (videos) into one using AVMutableComposition. I have successfully done this, as well as rotating and translating each instruction; however, there is still one issue remaining.
When the first clip finishes, the output freezes at its last frame (the last frame of the first clip). This only happens if another clip is visible: for example, if I set the opacity of the second and third clips to 0 at CMTime.Zero and of the first one to 0 at firstClip.Duration, the result is a video that displays the first clip and, once it finishes, a black background.
The clips' audio works perfectly.
Here is my code:
public void TESTING()
{
    // Microphone
    AVCaptureDevice microphone = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Audio);

    AVMutableComposition mixComposition = AVMutableComposition.Create();
    AVVideoCompositionLayerInstruction[] Instruction_Array = new AVVideoCompositionLayerInstruction[Clips.Count];

    foreach (string clip in Clips)
    {
        var asset = AVUrlAsset.FromUrl(new NSUrl(clip, false)) as AVUrlAsset;

        #region HoldVideoTrack
        // This range applies to the video, not to the mixComposition
        CMTimeRange range = new CMTimeRange()
        {
            Start = CMTime.Zero,
            Duration = asset.Duration
        };
        var duration = mixComposition.Duration;
        NSError error;

        AVMutableCompositionTrack videoTrack = mixComposition.AddMutableTrack(AVMediaType.Video, 0);
        AVAssetTrack assetVideoTrack = asset.TracksWithMediaType(AVMediaType.Video)[0];
        videoTrack.InsertTimeRange(range, assetVideoTrack, duration, out error);
        videoTrack.PreferredTransform = assetVideoTrack.PreferredTransform;

        if (microphone != null)
        {
            AVMutableCompositionTrack audioTrack = mixComposition.AddMutableTrack(AVMediaType.Audio, 0);
            AVAssetTrack assetAudioTrack = asset.TracksWithMediaType(AVMediaType.Audio)[0];
            audioTrack.InsertTimeRange(range, assetAudioTrack, duration, out error);
        }
        #endregion

        AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];
        var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);

        #region Instructions
        int counter = Clips.IndexOf(clip);
        Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrackWithMediaType);
        #endregion
    }

    // 6
    AVMutableVideoCompositionInstruction mainInstruction = AVMutableVideoCompositionInstruction.Create() as AVMutableVideoCompositionInstruction;
    CMTimeRange rangeIns = new CMTimeRange()
    {
        Start = new CMTime(0, 0),
        Duration = mixComposition.Duration
    };
    mainInstruction.TimeRange = rangeIns;
    mainInstruction.LayerInstructions = Instruction_Array;

    var mainComposition = AVMutableVideoComposition.Create();
    mainComposition.Instructions = new AVVideoCompositionInstruction[1] { mainInstruction };
    mainComposition.FrameDuration = new CMTime(1, 30);
    mainComposition.RenderSize = new CGSize(mixComposition.NaturalSize.Height, mixComposition.NaturalSize.Width);

    finalVideo_path = NSUrl.FromFilename(Path.Combine(Path.GetTempPath(), "Whole2.mov"));
    if (File.Exists(Path.GetTempPath() + "Whole2.mov"))
    {
        File.Delete(Path.GetTempPath() + "Whole2.mov");
    }

    // ... export video ...
    AVAssetExportSession exportSession = new AVAssetExportSession(mixComposition, AVAssetExportSessionPreset.HighestQuality)
    {
        OutputUrl = NSUrl.FromFilename(Path.Combine(Path.GetTempPath(), "Whole2.mov")),
        OutputFileType = AVFileType.QuickTimeMovie,
        ShouldOptimizeForNetworkUse = true,
        VideoComposition = mainComposition
    };
    exportSession.ExportAsynchronously(_OnExportDone);

    FinalVideo = Path.Combine(Path.GetTempPath(), "Whole2.mov");
}

private AVMutableVideoCompositionLayerInstruction TestingInstruction(AVAsset asset, CMTime currentTime, AVAssetTrack mixComposition_video_Track)
{
    var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(mixComposition_video_Track);
    var startTime = CMTime.Subtract(currentTime, asset.Duration);

    // NaturalSize.Height is passed as the width parameter because iOS stores the video recording horizontally
    CGAffineTransform translateToCenter = CGAffineTransform.MakeTranslation(mixComposition_video_Track.NaturalSize.Height, 0);
    // Angle in radians, not in degrees
    CGAffineTransform rotate = CGAffineTransform.Rotate(translateToCenter, (nfloat)(Math.PI / 2));

    instruction.SetTransform(rotate, startTime);
    instruction.SetOpacity(1, startTime);
    instruction.SetOpacity(0, currentTime);

    return instruction;
}
Does anyone know how to solve this?
If you need more information I will provide it as soon as I see your request. Thank you all for your time, have a nice day. (:
I believe I figured out the problem in your code: you are only creating instructions for the first track. Look at these two lines:
AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];
var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);
AVMutableComposition.TracksWithMediaType gets an array of tracks, so the [0] at the end of the first line grabs only the first track in the composition, which is the first video. As you loop through, you are just creating instructions for the first video over and over.
I'm not familiar with Xamarin, so your code confuses me a little, but I believe you can just do this and it should work:
Change these lines:
AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];
var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);
#region Instructions
int counter = Clips.IndexOf(clip);
Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrackWithMediaType);
#endregion
To this:
var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrack);
#region Instructions
int counter = Clips.IndexOf(clip);
Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrack);
#endregion
All I did here was get rid of the videoTrackWithMediaType variable you made and use videoTrack instead. There is no need to fetch the corresponding track, since you created it yourself and still have access to it in the code block where you create the instructions.

UWP Kinect V2 keep frame rate constant (30fps)

I am processing frames received from a Kinect v2 (Color and IR) in UWP. The program runs on a remote machine (an Xbox One S). The main goal is to get the frames and write them to disk at 30 fps for Color and IR, to process them further later.
I am using the following code to check the frame rate:
public MainPage()
{
    this.InitialiseFrameReader(); // initialises MediaCapture for IR and Color
}

const int COLOR_SOURCE = 0;
const int IR_SOURCE = 1;

private async void InitialiseFrameReader()
{
    await CleanupMediaCaptureAsync();
    var allGroups = await MediaFrameSourceGroup.FindAllAsync();
    if (allGroups.Count == 0)
    {
        return;
    }

    _groupSelectionIndex = (_groupSelectionIndex + 1) % allGroups.Count;
    var selectedGroup = allGroups[_groupSelectionIndex];
    var kinectGroup = selectedGroup;

    try
    {
        await InitializeMediaCaptureAsync(kinectGroup);
    }
    catch (Exception exception)
    {
        _logger.Log($"MediaCapture initialization error: {exception.Message}");
        await CleanupMediaCaptureAsync();
        return;
    }

    // Set up frame readers, register event handlers and start streaming.
    var startedKinds = new HashSet<MediaFrameSourceKind>();
    foreach (MediaFrameSource source in _mediaCapture.FrameSources.Values.Where(x => x.Info.SourceKind == MediaFrameSourceKind.Color || x.Info.SourceKind == MediaFrameSourceKind.Infrared))
    {
        MediaFrameSourceKind kind = source.Info.SourceKind;
        MediaFrameSource frameSource = null;
        int frameindex = COLOR_SOURCE;
        if (kind == MediaFrameSourceKind.Infrared)
        {
            frameindex = IR_SOURCE;
        }

        // Ignore this source if we already have a source of this kind.
        if (startedKinds.Contains(kind))
        {
            continue;
        }

        MediaFrameSourceInfo frameInfo = kinectGroup.SourceInfos[frameindex];
        if (_mediaCapture.FrameSources.TryGetValue(frameInfo.Id, out frameSource))
        {
            // Create a frame reader based on the source stream
            MediaFrameReader frameReader = await _mediaCapture.CreateFrameReaderAsync(frameSource);
            frameReader.FrameArrived += FrameReader_FrameArrived;
            _sourceReaders.Add(frameReader);

            MediaFrameReaderStartStatus status = await frameReader.StartAsync();
            if (status == MediaFrameReaderStartStatus.Success)
            {
                startedKinds.Add(kind);
            }
        }
    }
}

private async Task InitializeMediaCaptureAsync(MediaFrameSourceGroup sourceGroup)
{
    if (_mediaCapture != null)
    {
        return;
    }

    // Initialize MediaCapture with the source group.
    _mediaCapture = new MediaCapture();
    var settings = new MediaCaptureInitializationSettings
    {
        SourceGroup = sourceGroup,
        SharingMode = MediaCaptureSharingMode.SharedReadOnly,
        StreamingCaptureMode = StreamingCaptureMode.Video,
        MemoryPreference = MediaCaptureMemoryPreference.Cpu
    };
    await _mediaCapture.InitializeAsync(settings);
}

private void FrameReader_FrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    using (var frame = sender.TryAcquireLatestFrame())
    {
        if (frame != null)
        {
            //Settings.cameraframeQueue.Enqueue(null, frame.SourceKind.ToString(), frame.SystemRelativeTime.Value); // Add to queue to process frame
            Debug.WriteLine(frame.SourceKind.ToString() + " : " + frame.SystemRelativeTime.ToString());
        }
    }
}
I am trying to debug the application to check the frame rate, so I have removed all further processing. I am not sure whether I am calculating it incorrectly or something else is wrong.
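For reference, a rough sketch of one way to turn consecutive SystemRelativeTime values into an fps estimate; the names here are illustrative and this is not the exact code from the post:

// Rough per-source fps estimate from consecutive SystemRelativeTime deltas.
// Assumes frame.SystemRelativeTime is non-null for the Kinect sources.
private readonly Dictionary<MediaFrameSourceKind, TimeSpan> _lastTimestamp =
    new Dictionary<MediaFrameSourceKind, TimeSpan>();

private void LogFps(MediaFrameReference frame)
{
    TimeSpan now = frame.SystemRelativeTime.Value;
    if (_lastTimestamp.TryGetValue(frame.SourceKind, out TimeSpan previous))
    {
        double fps = 1000.0 / (now - previous).TotalMilliseconds;
        Debug.WriteLine($"{frame.SourceKind}: {fps:F1} fps");
    }
    _lastTimestamp[frame.SourceKind] = now;
}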
For example, System Relative Time from 04:37:06 to 04:37:48 gives:
IR, fps (occurrences):
31 (1)
30 (36)
29 (18)
28 (4)
Color, fps (occurrences):
30 (38)
29 (18)
28 (3)
I want this frame rate to be constant (30 fps) and the streams aligned, so that IR and Color produce the same number of frames over the same period.
The numbers above don't include any additional code: as soon as I add a processing queue or any other work, the fps drops and ranges from 15 to 30.
Can anyone please help me with this?
Thank you.
UPDATE:
After some testing and experimenting, I have noticed that the PC produces 30 fps, but the Xbox One (the remote device) produces a very low fps in debug mode. This improves when running in release mode, but the memory allocated for UWP apps on the console is quite low: the Xbox One makes a maximum of 1 GB of memory available to apps, and 5 GB to games.
https://learn.microsoft.com/en-us/windows/uwp/xbox-apps/system-resource-allocation
On the PC the fps stays at 30, as memory there has no such restrictions. This restriction is what causes the frame rate to drop on the console; the fps did improve when running in release mode or when the app was published to the Microsoft Store.

Take several photos with PhotoCapture Unity C#

I have a script that should be able to take more than one photo. I am using PhotoCapture and get an error that makes it unable to capture a second photo: "Value cannot be null" on the photoCaptureObject.StartPhotoModeAsync(cameraParameters, result => ...) line, but I don't understand why.
I have commented out the photoCaptureObject = null; line, so the photoCaptureObject should not be null. The line if (photoCaptureObject == null) return; also proves that photoCaptureObject is not null.
PhotoCapture photoCaptureObject = null;
Texture2D targetTexture = null;
public string path = "";
CameraParameters cameraParameters = new CameraParameters();

private void Awake()
{
    var cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
    targetTexture = new Texture2D(cameraResolution.width, cameraResolution.height);

    // Create a PhotoCapture object
    PhotoCapture.CreateAsync(false, captureObject =>
    {
        photoCaptureObject = captureObject;
        cameraParameters.hologramOpacity = 0.0f;
        cameraParameters.cameraResolutionWidth = cameraResolution.width;
        cameraParameters.cameraResolutionHeight = cameraResolution.height;
        cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;
    });
}

private void Update()
{
    // If not initialized yet, don't take input
    if (photoCaptureObject == null) return;

    if (Input.GetKeyDown(KeyCode.K) || Input.GetKeyDown("k"))
    {
        Debug.Log("k was pressed");
        VuforiaBehaviour.Instance.gameObject.SetActive(false);

        // Activate the camera
        photoCaptureObject.StartPhotoModeAsync(cameraParameters, result =>
        {
            if (result.success)
            {
                // Take a picture
                photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
            }
            else
            {
                Debug.LogError("Couldn't start photo mode!", this);
            }
        });
    }
}
There is some code in between that modifies the photo taken and so on, but I don't think that code is part of the problem.
private void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
{
    // Shutdown the photo capture resource
    VuforiaBehaviour.Instance.gameObject.SetActive(true);
    photoCaptureObject.Dispose();
    //photoCaptureObject = null;
    Debug.Log("Photomode stopped");
}
So what else could be null? Is it the StartPhotoModeAsync somehow? How can I fix this issue?
Thanks!
Okay, I understand now, thanks to Henrik's comment.
Unity's documentation specifically says this about StartPhotoModeAsync:
"Only one PhotoCapture instance can start the photo mode at any given time."
I had focused more on the sentence after that, which says one should always use PhotoCapture.StopPhotoModeAsync because keeping photo mode on takes more power, so I never considered that the instance wouldn't start again after being stopped.
Now I only call TakePhotoAsync in Update on key press and never stop photo mode, since the app I am making should always be able to capture photos.
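A minimal sketch of that approach, reusing the field names from the question (photoModeStarted is a hypothetical flag that is not in the original script):

// Start photo mode once, keep it running, and only take photos per key press.
private bool photoModeStarted = false; // hypothetical flag, not in the original script

private void Awake()
{
    PhotoCapture.CreateAsync(false, captureObject =>
    {
        photoCaptureObject = captureObject;
        // ... configure cameraParameters as in the original Awake ...
        photoCaptureObject.StartPhotoModeAsync(cameraParameters,
            result => photoModeStarted = result.success);
    });
}

private void Update()
{
    if (!photoModeStarted) return;
    if (Input.GetKeyDown(KeyCode.K))
    {
        photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
    }
}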

WIA: setting dynamic page size

I'm developing a WPF application to scan different documents with a scanner. The documents won't all be the same size; the size can vary.
I have my code working without scanner dialogs, and I would like the user not to have to preview the image and then scan it to get the real size (which would result in two scans).
The problem is that when I try to set the page size to auto before scanning,
SetWIAProperty(item.Properties, "3097", 100);
I get a System.Runtime.InteropServices.COMException with HRESULT 0x80210067.
I've googled this, and it seems my scanner does not support this property.
So, is there any way of achieving this? I need the resulting scanned image to contain only the document, not the whole scanner area (which is what I'm obtaining right now).
In case I can't tell the scanner to scan only the document, I've also thought about cropping the resulting image to obtain only the document I need, but I don't know how to do that right now.
Here is my code:
DeviceManager deviceManager = new DeviceManager();
Device scanner = null;
foreach (DeviceInfo deviceInfo in deviceManager.DeviceInfos)
{
    if (deviceInfo.DeviceID == scannerId)
    {
        scanner = deviceInfo.Connect();
        break;
    }
}

if (scanner == null)
{
    throw new Exception("Scanner not found");
}

Item item = scanner.Items[1] as Item;
int dpi = 300;
SetWIAProperty(item.Properties, "6146", 1);   // 1 = Color
SetWIAProperty(item.Properties, "6147", dpi); // dpis
SetWIAProperty(item.Properties, "6148", dpi); // dpis
// This line throws the exception
//SetWIAProperty(item.Properties, "3097", 100); // page size 0=A4, 1=letter, 2=custom, 100=auto

try
{
    ICommonDialog wiaCommonDialog = new CommonDialog();
    ImageFile scannedImage = (ImageFile)wiaCommonDialog.ShowTransfer(item, FormatID.wiaFormatPNG, false);
    if (scannedImage != null)
    {
        ImageProcess imgProcess = new ImageProcess();
        object convertFilter = "Convert";
        string convertFilterID = imgProcess.FilterInfos.get_Item(ref convertFilter).FilterID;
        imgProcess.Filters.Add(convertFilterID, 0);
        SetWIAProperty(imgProcess.Filters[imgProcess.Filters.Count].Properties, "FormatID", FormatID.wiaFormatPNG);
        scannedImage = imgProcess.Apply(scannedImage);

        if (System.IO.File.Exists(@"D:\temp\scanwia3.png"))
            System.IO.File.Delete(@"D:\temp\scanwia3.png");
        scannedImage.SaveFile(@"D:\temp\scanwia3.png");
    }
    scannedImage = null;
}
finally
{
    item = null;
    scanner = null;
}
And SetWIAProperty function:
private static void SetWIAProperty(IProperties properties, object propName, object propValue)
{
    Property prop = properties.get_Item(ref propName);
    prop.set_Value(ref propValue);
}
Any help would be appreciated.
Kind regards,
Jose.
The Page Size property belongs to the device, not to the item:
var WIA_IPS_PAGE_SIZE = "3097";
var WIA_PAGE_AUTO = 100;
SetWIAProperty(scanner.Properties, WIA_IPS_PAGE_SIZE, WIA_PAGE_AUTO);
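In the question's code, this call would replace the commented-out SetWIAProperty(item.Properties, "3097", 100); line, and it has to run before wiaCommonDialog.ShowTransfer(...) so the setting takes effect for the scan.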

Captured photo has vertical stripes

I'm capturing a photo using the WinRT MediaCapture class, but when I take the picture it gets weird transparent stripes. It's kind of hard to explain, so here are some pictures:
Before capturing picture (Previewing)
After taking picture
I have seen other people around here with roughly the same problem (like here), but their solutions didn't seem to work for me (either no result, or the photo got messed up).
Code I use for setting the resolution:
System.Collections.Generic.IEnumerable<VideoEncodingProperties> available_resolutions = captureManager.VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.Photo).Select(x => x as VideoEncodingProperties);
foreach (VideoEncodingProperties resolution in available_resolutions)
{
    if (resolution != null && resolution.Width == 640 && resolution.Height == 480) //(resolution.Width == 1920 && resolution.Height == 1080)
    {
        await captureManager.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.Photo, resolution);
    }
}
Code I'm using for taking the photo:
private async Task<BitmapImage> ByteArrayToBitmapImage(byte[] byteArray)
{
    var bitmapImage = new BitmapImage();
    using (var stream = new InMemoryRandomAccessStream())
    {
        await stream.WriteAsync(byteArray.AsBuffer());
        stream.Seek(0);
        await bitmapImage.SetSourceAsync(stream);
        await stream.FlushAsync();
    }
    return bitmapImage;
}

/// <summary>
/// Relayed Execute method for TakePictureCommand.
/// </summary>
async void ExecuteTakePicture()
{
    System.Diagnostics.Debug.WriteLine("Started making picture");
    DateTime starttime = DateTime.Now;

    ImageEncodingProperties format = ImageEncodingProperties.CreateJpeg();
    using (var imageStream = new InMemoryRandomAccessStream())
    {
        await captureManager.CapturePhotoToStreamAsync(format, imageStream);

        // Compresses the image if it exceeds the maximum file size
        imageStream.Seek(0);

        // Resize the image if needed
        uint maxImageWidth = 640;
        uint maxImageHeight = 480;
        if (AvatarPhoto)
        {
            maxImageHeight = 200;
            maxImageWidth = 200;

            // Create a BitmapDecoder from the stream
            BitmapDecoder resizeDecoder = await BitmapDecoder.CreateAsync(imageStream);
            if (resizeDecoder.PixelWidth > maxImageWidth || resizeDecoder.PixelHeight > maxImageHeight)
            {
                // Resize the image if it exceeds the maximum width or height
                WriteableBitmap tempBitmap = new WriteableBitmap((int)resizeDecoder.PixelWidth, (int)resizeDecoder.PixelHeight);
                imageStream.Seek(0);
                await tempBitmap.SetSourceAsync(imageStream);
                WriteableBitmap resizedImage = tempBitmap.Resize((int)maxImageWidth, (int)maxImageHeight, WriteableBitmapExtensions.Interpolation.Bilinear);
                tempBitmap = null;

                // Assign the resized WriteableBitmap to imageStream
                await resizedImage.ToStream(imageStream, BitmapEncoder.JpegEncoderId);
                resizedImage = null;
            }
            imageStream.Seek(0);
        }

        // Converts the final image into a Base64 string
        imageStream.Seek(0);
        BitmapDecoder decoder = await BitmapDecoder.CreateAsync(imageStream);
        PixelDataProvider pixels = await decoder.GetPixelDataAsync();
        byte[] bytes = pixels.DetachPixelData();

        // Encode image
        InMemoryRandomAccessStream encoded = new InMemoryRandomAccessStream();
        BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, encoded);
        encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore, maxImageWidth, maxImageHeight, decoder.DpiX, decoder.DpiY, bytes);

        // Rotate the image based on the orientation of the camera
        if (currentOrientation == DisplayOrientations.Portrait)
        {
            encoder.BitmapTransform.Rotation = BitmapRotation.Clockwise90Degrees;
        }
        else if (currentOrientation == DisplayOrientations.LandscapeFlipped)
        {
            encoder.BitmapTransform.Rotation = BitmapRotation.Clockwise180Degrees;
        }
        if (FrontCam)
        {
            if (currentOrientation == DisplayOrientations.Portrait)
            {
                encoder.BitmapTransform.Rotation = BitmapRotation.Clockwise270Degrees;
            }
            else if (currentOrientation == DisplayOrientations.LandscapeFlipped)
            {
                encoder.BitmapTransform.Rotation = BitmapRotation.Clockwise180Degrees;
            }
        }
        await encoder.FlushAsync();
        encoder = null;

        // Read bytes
        byte[] outBytes = new byte[encoded.Size];
        await encoded.AsStream().ReadAsync(outBytes, 0, outBytes.Length);
        encoded.Dispose();
        encoded = null;

        // Create Base64
        image = await ByteArrayToBitmapImage(outBytes);
        System.Diagnostics.Debug.WriteLine("Pixel width: " + image.PixelWidth + " height: " + image.PixelHeight);
        base64 = Convert.ToBase64String(outBytes);
        Array.Clear(outBytes, 0, outBytes.Length);

        await imageStream.FlushAsync();
        imageStream.Dispose();
    }

    DateTime endtime = DateTime.Now;
    TimeSpan span = (endtime - starttime);

    // Kind of a hacky way to prevent high RAM usage and even crashing; remove when overall RAM usage has been lowered
    GC.Collect();
    System.Diagnostics.Debug.WriteLine("Making the picture took: " + span.Seconds + " seconds");

    if (image != null)
    {
        RaisePropertyChanged("CapturedImage");
        // Tell both UsePictureCommand and ResetCommand that the situation has changed.
        ((RelayedCommand)UsePictureCommand).RaiseCanExecuteChanged();
        ((RelayedCommand)ResetCommand).RaiseCanExecuteChanged();
    }
    else
    {
        throw new InvalidOperationException("Imagestream is not valid");
    }
}
If any more information is needed, feel free to comment; I will try to provide it as fast as possible. Thanks for reading.
The aspect ratio of the preview has to match the aspect ratio of the captured photo, or you'll get artifacts like the one in your capture (although it actually depends on the driver implementation, so it may vary from device to device). The steps, with a sketch after this list:
1. Use the MediaCapture.VideoDeviceController.GetMediaStreamProperties() method on the MediaStreamType.VideoPreview stream.
2. Get the resulting VideoEncodingProperties, and use the Width and Height to figure out the aspect ratio.
3. Call MediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties() on the MediaStreamType.Photo stream, and find out which entries have a matching aspect ratio (I recommend using a tolerance value; something like 0.015f might be good).
4. Out of the ones that do, choose the one that best suits your needs, e.g. by paying attention to the total resolution W * H.
5. Apply your selection by calling MediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync() on the MediaStreamType.Photo stream, passing the encoding properties you want.
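Putting those steps together, a minimal sketch, assuming an initialized MediaCapture named captureManager as in the question (the 0.015 tolerance is the suggested value above):

// Match the photo stream's aspect ratio to the preview stream's.
var previewProps = captureManager.VideoDeviceController
    .GetMediaStreamProperties(MediaStreamType.VideoPreview) as VideoEncodingProperties;
double previewAspect = (double)previewProps.Width / previewProps.Height;

const double tolerance = 0.015;
var match = captureManager.VideoDeviceController
    .GetAvailableMediaStreamProperties(MediaStreamType.Photo)
    .OfType<VideoEncodingProperties>()
    .Where(p => Math.Abs((double)p.Width / p.Height - previewAspect) < tolerance)
    .OrderByDescending(p => p.Width * p.Height) // prefer the highest total resolution
    .FirstOrDefault();

if (match != null)
{
    await captureManager.VideoDeviceController
        .SetMediaStreamPropertiesAsync(MediaStreamType.Photo, match);
}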
More information in this thread. An SDK sample is available here.
