I'm capturing a photo using the WinRT MediaCapture class, but when I take the picture it gets weird transparent stripes. It's kind of hard to explain, so here are some pictures:
Before capturing picture (Previewing)
After taking picture
I have seen other people around here with roughly the same problem (like here), but their solutions didn't seem to work for me (either there was no result, or the photo got messed up).
Code I use for setting the resolution:
System.Collections.Generic.IEnumerable<VideoEncodingProperties> available_resolutions = captureManager.VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.Photo).Select(x => x as VideoEncodingProperties);
foreach (VideoEncodingProperties resolution in available_resolutions)
{
    if (resolution != null && resolution.Width == 640 && resolution.Height == 480) //(resolution.Width==1920 && resolution.Height==1080) //resolution.Width==640 && resolution.Height==480)
    {
        await captureManager.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.Photo, resolution);
    }
}
Code I'm using for taking the photo:
private async Task<BitmapImage> ByteArrayToBitmapImage(byte[] byteArray)
{
    var bitmapImage = new BitmapImage();
    using (var stream = new InMemoryRandomAccessStream())
    {
        await stream.WriteAsync(byteArray.AsBuffer());
        stream.Seek(0);
        await bitmapImage.SetSourceAsync(stream);
        await stream.FlushAsync();
    }
    return bitmapImage;
}
/// <summary>
/// Relayed Execute method for TakePictureCommand.
/// </summary>
async void ExecuteTakePicture()
{
    System.Diagnostics.Debug.WriteLine("Started making picture");
    DateTime starttime = DateTime.Now;
    ImageEncodingProperties format = ImageEncodingProperties.CreateJpeg();
    using (var imageStream = new InMemoryRandomAccessStream())
    {
        await captureManager.CapturePhotoToStreamAsync(format, imageStream);
        //Compresses the image if it exceeds the maximum file size
        imageStream.Seek(0);
        //Resize the image if needed
        uint maxImageWidth = 640;
        uint maxImageHeight = 480;
        if (AvatarPhoto)
        {
            maxImageHeight = 200;
            maxImageWidth = 200;
            //Create a BitmapDecoder from the stream
            BitmapDecoder resizeDecoder = await BitmapDecoder.CreateAsync(imageStream);
            if (resizeDecoder.PixelWidth > maxImageWidth || resizeDecoder.PixelHeight > maxImageHeight)
            {
                //Resize the image if it exceeds the maximum width or height
                WriteableBitmap tempBitmap = new WriteableBitmap((int)resizeDecoder.PixelWidth, (int)resizeDecoder.PixelHeight);
                imageStream.Seek(0);
                await tempBitmap.SetSourceAsync(imageStream);
                WriteableBitmap resizedImage = tempBitmap.Resize((int)maxImageWidth, (int)maxImageHeight, WriteableBitmapExtensions.Interpolation.Bilinear);
                tempBitmap = null;
                //Assign the resized WriteableBitmap to imageStream
                await resizedImage.ToStream(imageStream, BitmapEncoder.JpegEncoderId);
                resizedImage = null;
            }
            //Rewind the stream after the resize
            imageStream.Seek(0);
        }
        //Converts the final image into a Base64 String
        imageStream.Seek(0);
        BitmapDecoder decoder = await BitmapDecoder.CreateAsync(imageStream);
        PixelDataProvider pixels = await decoder.GetPixelDataAsync();
        byte[] bytes = pixels.DetachPixelData();
        //Encode image
        InMemoryRandomAccessStream encoded = new InMemoryRandomAccessStream();
        BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, encoded);
        encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore, maxImageWidth, maxImageHeight, decoder.DpiX, decoder.DpiY, bytes);
        //Rotate the image based on the orientation of the camera
        if (currentOrientation == DisplayOrientations.Portrait)
        {
            encoder.BitmapTransform.Rotation = BitmapRotation.Clockwise90Degrees;
        }
        else if (currentOrientation == DisplayOrientations.LandscapeFlipped)
        {
            encoder.BitmapTransform.Rotation = BitmapRotation.Clockwise180Degrees;
        }
        if (FrontCam)
        {
            if (currentOrientation == DisplayOrientations.Portrait)
            {
                encoder.BitmapTransform.Rotation = BitmapRotation.Clockwise270Degrees;
            }
            else if (currentOrientation == DisplayOrientations.LandscapeFlipped)
            {
                encoder.BitmapTransform.Rotation = BitmapRotation.Clockwise180Degrees;
            }
        }
        await encoder.FlushAsync();
        encoder = null;
        //Read bytes
        byte[] outBytes = new byte[encoded.Size];
        await encoded.AsStream().ReadAsync(outBytes, 0, outBytes.Length);
        encoded.Dispose();
        encoded = null;
        //Create Base64
        image = await ByteArrayToBitmapImage(outBytes);
        System.Diagnostics.Debug.WriteLine("Pixel width: " + image.PixelWidth + " height: " + image.PixelHeight);
        base64 = Convert.ToBase64String(outBytes);
        Array.Clear(outBytes, 0, outBytes.Length);
        await imageStream.FlushAsync();
        imageStream.Dispose();
    }
    DateTime endtime = DateTime.Now;
    TimeSpan span = (endtime - starttime);
    //Kind of a hacky way to prevent high RAM usage and even crashing, remove when overall RAM usage has been lowered
    GC.Collect();
    System.Diagnostics.Debug.WriteLine("Making the picture took: " + span.Seconds + " seconds");
    if (image != null)
    {
        RaisePropertyChanged("CapturedImage");
        //Tell both UsePictureCommand and ResetCommand that the situation has changed.
        ((RelayedCommand)UsePictureCommand).RaiseCanExecuteChanged();
        ((RelayedCommand)ResetCommand).RaiseCanExecuteChanged();
    }
    else
    {
        throw new InvalidOperationException("Imagestream is not valid");
    }
}
If any more information is needed, feel free to comment; I will try to provide it as soon as possible. Thanks for reading.
The aspect ratio of the preview has to match the aspect ratio of the captured photo, or you'll get artifacts like the one in your capture (although it actually depends on the driver implementation, so it may vary from device to device).
1. Call the MediaCapture.VideoDeviceController.GetMediaStreamProperties() method on MediaStreamType.VideoPreview.
2. Cast the result to VideoEncodingProperties and use its Width and Height to figure out the preview's aspect ratio.
3. Call MediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties() on MediaStreamType.Photo, and find out which entries have a matching aspect ratio (I recommend using a tolerance value; something like 0.015f should be good).
4. Out of the ones that do, choose whichever better suits your needs, e.g. by paying attention to the total resolution W * H.
5. Apply your selection by calling MediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync() on MediaStreamType.Photo, passing the encoding properties you want. A C# sketch of this selection logic is shown below.
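For illustration, here is a minimal sketch of those steps, assuming an initialized MediaCapture named captureManager (as in the question) and a using System.Linq; directive; the 0.015 tolerance and the "largest matching resolution wins" policy are just example choices:

private async Task MatchPhotoAspectRatioToPreviewAsync(MediaCapture captureManager)
{
    // 1-2: read the current preview properties and compute its aspect ratio.
    var previewProps = captureManager.VideoDeviceController
        .GetMediaStreamProperties(MediaStreamType.VideoPreview) as VideoEncodingProperties;
    if (previewProps == null)
    {
        return;
    }

    double previewRatio = (double)previewProps.Width / previewProps.Height;
    const double tolerance = 0.015;

    // 3-4: collect the photo resolutions whose aspect ratio (almost) matches the preview's,
    // ordered by total resolution so the largest match comes first.
    var matching = captureManager.VideoDeviceController
        .GetAvailableMediaStreamProperties(MediaStreamType.Photo)
        .OfType<VideoEncodingProperties>()
        .Where(p => Math.Abs((double)p.Width / p.Height - previewRatio) < tolerance)
        .OrderByDescending(p => p.Width * p.Height)
        .ToList();

    if (matching.Count > 0)
    {
        // 5: apply the chosen photo resolution.
        await captureManager.VideoDeviceController
            .SetMediaStreamPropertiesAsync(MediaStreamType.Photo, matching[0]);
    }
}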
More information in this thread. An SDK sample is available here.
Related
This is my first time trying to render something, and I'm having big trouble. I am using the DirectN library and the SwapChainSurface class from KlearTouch.MediaPlayer, and I am trying to render a BGRA32 frame using a D3D11 device.
For this I have slightly modified OnNewSurfaceAvailable:
public void OnNewSurfaceAvailable2(Action<ID3D11Device, ID3D11DeviceContext> updateSurface)
{
    if (rendering)
    {
        return;
    }
    try
    {
        if (this.swapChain is null || swapChainComObject is null)
        {
            return;
        }
        swapChainComObject.GetDesc(out var swapChainDesc).ThrowOnError();
        if (swapChainDesc.BufferDesc.Width != PanelWidth || swapChainDesc.BufferDesc.Height != PanelHeight)
        {
            swapChainComObject.ResizeBuffers(2, PanelWidth, PanelHeight, DXGI_FORMAT.DXGI_FORMAT_UNKNOWN, 0).ThrowOnError();
        }
        var device = swapChain.Object.GetDevice1().Object.As<ID3D11Device>();
        device.GetImmediateContext(out var context);
        // context.ClearRenderTargetView(renderTargetView.Object, new []{0f, 1f, 1f, 1f});
        updateSurface(device, context);
        swapChainComObject.Present(1, 0).ThrowOnError();
    }
    catch (ObjectDisposedException)
    {
        Reinitialize();
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine("\nException: " + ex, nameof(SwapChainSurface) + '.' + nameof(OnNewSurfaceAvailable));
    }
    rendering = false;
}
OnNewSurfaceAvailable2 is called from:
void VideoFrameArrived(Bgra32VideoFrame frame)
{
    DispatcherQueue.TryEnqueue(() =>
    {
        previewSurface.OnNewSurfaceAvailable2((device, context) =>
        {
            var size = frame.m_height * frame.m_height * 4;
            D3D11_TEXTURE2D_DESC td;
            td.ArraySize = 1;
            td.BindFlags = (uint) D3D11_BIND_FLAG.D3D11_BIND_SHADER_RESOURCE;
            td.Usage = D3D11_USAGE.D3D11_USAGE_DYNAMIC;
            td.CPUAccessFlags = (uint) D3D11_CPU_ACCESS_FLAG.D3D11_CPU_ACCESS_WRITE;
            td.Format = DXGI_FORMAT.DXGI_FORMAT_B8G8R8A8_UNORM;
            td.Height = (uint) frame.m_height;
            td.Width = (uint) frame.m_width;
            td.MipLevels = 1;
            td.MiscFlags = 0;
            td.SampleDesc.Count = 1;
            td.SampleDesc.Quality = 0;
            D3D11_SUBRESOURCE_DATA srd;
            srd.pSysMem = frame.m_pixelBuffer;
            srd.SysMemPitch = (uint) frame.m_height;
            srd.SysMemSlicePitch = 0;
            var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new []{srd});
            var mappedResource = context.Map(texture.Object, 0, D3D11_MAP.D3D11_MAP_WRITE_DISCARD);
            var mappedData = mappedResource.pData;
            unsafe
            {
                Buffer.MemoryCopy(frame.m_pixelBuffer.ToPointer(), mappedData.ToPointer(), size, size);
            }
            // Just for debug
            var pixelsInFrame = new byte[size];
            var pixelsInResource = new byte[size];
            Marshal.Copy(frame.m_pixelBuffer, pixelsInFrame, 0, size);
            Marshal.Copy(mappedResource.pData, pixelsInResource, 0, size);
            context.Unmap(texture.Object, 0);
        });
    });
}
The problem is that I can't see anything rendered; the surface stays black, and I assume it shouldn't be.
Update: Project repository
Update 2:
I solved my issue. I had too little knowledge of DX11, so I had to study how things work there. With this knowledge I updated the repository, which can now display a preview from a Blackmagic Design card. It is just an example with many issues, so be careful, and feel free to look there for inspiration.
There are various issues here.
First, in the frame-arrived handler you have:
var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new []{srd});
So you create a texture, but you never use it anywhere; it needs to be blitted to the swap chain (you can do a CopyResource on the device context, or draw a full-screen triangle/quad).
Note that CopyResource will only work if your swap chain has the same size as your incoming texture, which is rather unlikely, so you will most likely have to draw a blit with a shader.
Also, you are actually copying the data into the texture twice:
var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new []{srd});
Since you provide initial data, the content is already there.
Also, the pitch is incorrect:
srd.SysMemPitch = (uint) frame.m_height;
The pitch is the length (in bytes) of a line, so it should be:
srd.SysMemPitch = frame.GetRowBytes();
Please also note that in the case of a non-converted DeckLink frame, GetRowBytes can be different from width * 4 (the row size can be aligned to a multiple of 16/32 or other values). A small illustration of that padding follows.
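Purely as an illustration (the 1366-pixel width and the 64-byte alignment here are made-up example values, not what any particular card uses), the padded row size can be computed like this:

// Illustration only: how a padded row can differ from width * 4.
int width = 1366;
int packedRowBytes = width * 4;        // 5464 bytes of actual pixel data per row
int alignment = 64;                    // hypothetical row alignment
int paddedRowBytes = (packedRowBytes + alignment - 1) / alignment * alignment;   // 5504 bytes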
Next, in the case of the resource map, the following is also incorrect:
unsafe
{
    Buffer.MemoryCopy(frame.m_pixelBuffer.ToPointer(), mappedData.ToPointer(), size, size);
}
You are not checking the pitch/stride requirement of the mapped texture (which can be different as well), so you need to do something like this (a concrete sketch of the second branch follows the skeleton):
if (mappedResource.RowPitch == frame.GetRowBytes())
{
    // here you can use a direct copy as above
}
else
{
    // here you need to copy data line per line
}
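For the second branch, a row-by-row copy could look roughly like this. This is only a sketch: m_pixelBuffer, m_height and GetRowBytes() come from the question and the answer above, and the exact numeric types are assumptions.

// Sketch of the "copy data line per line" branch. Assumes frame.GetRowBytes() is the
// source stride in bytes and mappedResource.RowPitch is the destination stride.
unsafe
{
    var src = (byte*)frame.m_pixelBuffer.ToPointer();
    var dst = (byte*)mappedResource.pData.ToPointer();
    long rowBytes = frame.GetRowBytes();          // bytes of pixel data per source row
    for (long y = 0; y < frame.m_height; y++)
    {
        Buffer.MemoryCopy(src + y * rowBytes,
                          dst + y * mappedResource.RowPitch,
                          rowBytes, rowBytes);
    }
}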
I am trying to generate a GIF using the BumpKit GIF encoder, and while the GIF works (except for the first frames acting up), when I try to load it in Photoshop it says "Could not complete request because the file-format module cannot parse the file".
I don't know how to check the validity of the GIF, because it works when I view it. This is how I'm using the BumpKit library:
public void SaveImagesAsGif(Stream stream, ICollection<Bitmap> images, float fps, bool loop)
{
    if (images == null || images.ToArray().Length == 0)
    {
        throw new ArgumentException("There are no images to add to animation");
    }
    int loopCount = 0;
    if (!loop)
    {
        loopCount = 1;
    }
    using (var encoder = new BumpKit.GifEncoder(stream, null, null, loopCount))
    {
        foreach (Bitmap bitmap in images)
        {
            encoder.AddFrame(bitmap, 0, 0, TimeSpan.FromSeconds(1 / fps));
        }
    }
    stream.Position = 0;
}
Am I doing something wrong when generating the gif?
When you are using the BumpKit GIF encoder library, I think you have to call the InitHeader function first.
Taken from the GifEncoder.cs source:
private void InitHeader(Stream sourceGif, int w, int h)
You can see the source code for the InitHeader function, the AddFrame function and the rest of the GifEncoder.cs file at https://github.com/DataDink/Bumpkit/blob/master/BumpKit/BumpKit/GifEncoder.cs
Note that InitHeader is declared private in that source, so you would need to expose it (for example, make it public or internal in your own copy of GifEncoder.cs) before you can call it from your code. With that in place, it's a small edit to your code:
public void SaveImagesAsGif(Stream stream, ICollection<Bitmap> images, float fps, bool loop)
{
    if (images == null || images.ToArray().Length == 0)
    {
        throw new ArgumentException("There are no images to add to animation");
    }
    int loopCount = 0;
    if (!loop)
    {
        loopCount = 1;
    }
    using (var encoder = new BumpKit.GifEncoder(stream, null, null, loopCount))
    {
        //calling initheader function
        //TODO: Change YOURGIFWIDTHHERE and YOURGIFHEIGHTHERE to desired width and height for gif
        encoder.InitHeader(stream, YOURGIFWIDTHHERE, YOURGIFHEIGHTHERE);
        foreach (Bitmap bitmap in images)
        {
            encoder.AddFrame(bitmap, 0, 0, TimeSpan.FromSeconds(1 / fps));
        }
    }
    stream.Position = 0;
}
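For completeness, a hypothetical usage of this method, writing two placeholder frames at 10 fps to a file (the path, bitmap contents and namespaces such as System.Drawing and System.IO are assumptions):

// Hypothetical usage of SaveImagesAsGif (file path and bitmaps are placeholders).
var frames = new List<Bitmap>
{
    new Bitmap(100, 100),
    new Bitmap(100, 100)
};
using (var fileStream = File.Create(@"C:\temp\animation.gif"))
{
    SaveImagesAsGif(fileStream, frames, 10f, loop: true);
}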
I have a problem with SetRecordRotation in UWP. I was wondering whether there is a possibility to increase SetRecordRotation(VideoRotation.Clockwise180Degrees) to SetRecordRotation(VideoRotation.Clockwise360Degrees), not just to double the rotation but to flip or mirror the video like in Skype.
I am creating an app that needs to record with a preview like in Skype. Below is my code; any suggestions?
private async Task InitializeCameraAsync()
{
    Debug.WriteLine("InitializeCameraAsync");
    if (mc == null)
    {
        // Attempt to get the back camera if one is available, but use any camera device if not
        var allVideoDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
        //StorageFile sampleFile = await localFolder.GetFileAsync("proporties.txt");
        //String timestamp = await FileIO.ReadTextAsync(sampleFile);
        var cameraDevice = localSettings.Values["camValue"].ToString();
        if (allVideoDevices == null)
        {
            Debug.WriteLine("No camera device found!");
            return;
        }
        // Create MediaCapture and its settings
        mc = new MediaCapture();
        var settings = new MediaCaptureInitializationSettings { VideoDeviceId = allVideoDevices[int.Parse(cameraDevice)].Id };
        await mc.InitializeAsync(settings);
        //CaptureElement.RenderTransform = new ScaleTransform { ScaleX = -1 };
        //_isInitialized = true;
        SetResolution();
        DisplayInformation displayInfo = DisplayInformation.GetForCurrentView();
        displayInfo.OrientationChanged += DisplayInfo_OrientationChanged;
        DisplayInfo_OrientationChanged(displayInfo, null);
        stream = new InMemoryRandomAccessStream();
        llmr = await mc.PrepareLowLagRecordToStreamAsync(MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Auto), stream);
        //mc.SetPreviewRotation(VideoRotation.Clockwise180Degrees);
        //mc.SetRecordRotation(rotationAngle);
        //CaptureElement.RenderTransform = new ScaleTransform()
        //{
        //    ScaleX = 1
        //};
        //mc.SetPreviewMirroring(_mirroringPreview);
        //SetPreviewRotationAsync();

        // I want this to be VideoRotation.Clockwise360Degrees instead of Clockwise180Degrees. Is there any way to increase it?
        mc.SetRecordRotation(VideoRotation.Clockwise180Degrees);
        await llmr.StartAsync();
        await llmr.StopAsync();
        CaptureElement.Source = mc;
        CaptureElement.FlowDirection = _mirroringPreview ? FlowDirection.LeftToRight : FlowDirection.RightToLeft;
        CaptureStack.Visibility = Visibility.Visible;
        //if (localSettings.Values.ContainsKey("camValue") == false)
        //{
        //    CameraErrorTextBlock.Visibility = Visibility.Visible;
        //}
        RecordProgress.Visibility = Visibility.Visible;
        CaptureGrid.Visibility = Visibility.Visible;
        CancelButton.HorizontalAlignment = HorizontalAlignment.Right;
        //CaptureElement.FlowDirection = FlowDirection.LeftToRight;
        //Prepare low lag recording
        stream = new InMemoryRandomAccessStream();
        //var encodingProperties = (CaptureElement.Tag as StreamResolution).EncodingProperties;
        var encodingProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Auto);
        // Calculate rotation angle, taking mirroring into account if necessary
        //var rotationAngle = VideoRotation.Clockwise180Degrees + VideoRotation.Clockwise180Degrees;
        //mc.SetRecordRotation(rotationAngle);
        //var rotationAngle = 360 - ConvertDeviceOrientationToDegrees(GetCameraOrientation());
        //encodingProfile.Video.Properties.Add(RotationKey, mc.SetRecordRotation(rotationAngle));
        llmr = await mc.PrepareLowLagRecordToStreamAsync(encodingProfile, stream);
        await mc.StartPreviewAsync();
    }
    else if (mc != null)
    {
        //if (localSettings.Values.ContainsKey("camValue") == true)
        //{
        CameraErrorTextBlock.Visibility = Visibility.Visible;
        //}
    }
}
Two things that I would like to call out:
Rotating anything 360 degrees is the same as rotating it 0 degrees, which means it will remain unchanged. What you want is to flip it horizontally, to mirror it.
Apps like Skype only do this for the user-side preview, not for the stream transmitted to the other endpoint, which remains unchanged. The reason for this is that if the user holds up something like text, the receiver should be able to see it the way it is. Reading mirrored text is a lot harder.
So, even though I said you should do mirroring instead of rotating 360 degrees, in reality you shouldn't do anything at all to the video capture stream in order to provide the best experience.
Finally, to mirror the preview, the easiest way is to use the FlowDirection property of the CaptureElement (for C# or C++), or alternatively use a transform of x:-1 y:1 on the style of the video element (for JS):
cameraPreview.style.transform = "scale(-1, 1)";
or a RenderTransform (for C# or C++). For a reference on how to mirror the preview, you can check out the CameraStarterKit sample on GitHub, which covers C#, C++, JS and VB. A small C# sketch of both options follows.
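For the C#/XAML case, a minimal sketch of both options; the CaptureElement name and the _mirroringPreview flag are taken from the question's code, and whether RightToLeft corresponds to "mirrored" on your device depends on how the rest of your rotation handling is set up, so treat this as illustrative rather than as the sample's exact code:

// Option 1: mirror only the on-screen preview via FlowDirection.
// The recorded/transmitted stream is left untouched.
CaptureElement.FlowDirection = _mirroringPreview
    ? FlowDirection.RightToLeft
    : FlowDirection.LeftToRight;

// Option 2: mirror via a RenderTransform instead (equivalent visual result).
CaptureElement.RenderTransform = new ScaleTransform { ScaleX = _mirroringPreview ? -1 : 1 };
CaptureElement.RenderTransformOrigin = new Point(0.5, 0.5);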
I'm working on a service for a company project that handles image processing, and one of the methods is supposed to clean the metadata from an image passed to it.
I think the implementation I currently have works, but I'm not sure whether it affects the quality of the images or whether there's a better way to handle this task. Could you let me know if you know of a better way to do this?
Here's the method in question:
public byte[] CleanMetadata(byte[] data)
{
    Image image;
    if (tryGetImageFromBytes(data, out image))
    {
        Bitmap bitmap = new Bitmap(image);
        using (var graphics = Graphics.FromImage(bitmap))
        {
            graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
            graphics.CompositingMode = CompositingMode.SourceCopy;
            graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
            graphics.DrawImage(image, new Point(0, 0));
        }
        ImageConverter converter = new ImageConverter();
        return (byte[])converter.ConvertTo(image, typeof(byte[]));
    }
    return null;
}
And, for reference, the tryGetImageFromBytes method:
private bool tryGetImageFromBytes(byte[] data, out Image image)
{
    try
    {
        using (var ms = new MemoryStream(data))
        {
            image = Image.FromStream(ms);
        }
    }
    catch (ArgumentException)
    {
        image = null;
        return false;
    }
    return true;
}
To reiterate: is there a better way to remove metadata from an image that doesn't involve redrawing it?
Thanks in advance.
The .NET way: You may want to try your hand at the System.Windows.Media.Imaging.BitmapEncoder class - more precisely, its Metadata collection. Quoting MSDN:
Metadata - Gets or sets the metadata that will be associated with this
bitmap during encoding.
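A minimal sketch of that approach using WPF's System.Windows.Media.Imaging (the method name and file paths are placeholders): re-encode the first frame while passing null for the metadata argument so nothing is carried over. Note that re-encoding a JPEG this way is not lossless.

// Hedged sketch: strip metadata by re-encoding with null metadata.
public static void StripMetadata(string inputPath, string outputPath)
{
    using (var input = System.IO.File.OpenRead(inputPath))
    using (var output = System.IO.File.Create(outputPath))
    {
        var decoder = BitmapDecoder.Create(input, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
        var encoder = new JpegBitmapEncoder();
        // BitmapFrame.Create(source, thumbnail, metadata, colorContexts): null metadata drops it.
        encoder.Frames.Add(BitmapFrame.Create(decoder.Frames[0], null, null, null));
        encoder.Save(output);
    }
}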
The 'Oops, I (not so accidentally) forgot something' way: Open the original bitmap file into a System.Drawing.Bitmap object. Clone it to a new Bitmap object. Write the clone's contents to a new file. Like this one-liner:
((System.Drawing.Bitmap)System.Drawing.Image.FromFile(@"C:\file.png").Clone()).Save(@"C:\file-nometa.png");
The direct file manipulation way (only for JPEG): Blog post about removing the EXIF area.
I would suggest this; the source is here: Removing Exif-Data for jpg file.
The first function, changed a bit:
public Stream PatchAwayExif(Stream inStream)
{
    Stream outStream = new MemoryStream();
    byte[] jpegHeader = new byte[2];
    jpegHeader[0] = (byte)inStream.ReadByte();
    jpegHeader[1] = (byte)inStream.ReadByte();
    if (jpegHeader[0] == 0xff && jpegHeader[1] == 0xd8) //check if it's a jpeg file
    {
        SkipAppHeaderSection(inStream);
    }
    outStream.WriteByte(0xff);
    outStream.WriteByte(0xd8);
    int readCount;
    byte[] readBuffer = new byte[4096];
    while ((readCount = inStream.Read(readBuffer, 0, readBuffer.Length)) > 0)
        outStream.Write(readBuffer, 0, readCount);
    return outStream;
}
And the second function with no changes, as in the post:
private void SkipAppHeaderSection(Stream inStream)
{
    byte[] header = new byte[2];
    header[0] = (byte)inStream.ReadByte();
    header[1] = (byte)inStream.ReadByte();
    while (header[0] == 0xff && (header[1] >= 0xe0 && header[1] <= 0xef))
    {
        int exifLength = inStream.ReadByte();
        exifLength = exifLength << 8;
        exifLength |= inStream.ReadByte();
        for (int i = 0; i < exifLength - 2; i++)
        {
            inStream.ReadByte();
        }
        header[0] = (byte)inStream.ReadByte();
        header[1] = (byte)inStream.ReadByte();
    }
    inStream.Position -= 2; //skip back two bytes
}
Creating a new bitmap will clear out all the exif data.
var newImage = new Bitmap(image);
If you want to remove only specific info:
private Image RemoveGpsExifInfo(Image image)
{
    foreach (var item in image.PropertyItems)
    {
        // GPS range is from 0x0000 to 0x001F. Full list here -> https://exiftool.org/TagNames/EXIF.html (click on GPS tags)
        if (item.Id <= 0x001F)
        {
            image.RemovePropertyItem(item.Id);
        }
    }
    return image;
}
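And a hypothetical usage of that helper (the file paths are placeholders); note that PropertyItems is simply empty for images without EXIF data, so the loop is then a no-op:

// Hypothetical usage of RemoveGpsExifInfo.
using (var original = Image.FromFile(@"C:\photos\photo.jpg"))
{
    var cleaned = RemoveGpsExifInfo(original);
    cleaned.Save(@"C:\photos\photo-nogps.jpg");
}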
We have a method to convert an icon to a given size, which looks like this:
private BitmapFrame GetSizedSource(Icon icon, int size)
{
    var stream = IconToStream(icon);
    var decoder = BitmapDecoder.Create(stream, BitmapCreateOptions.DelayCreation, BitmapCacheOption.OnDemand);
    var frame = decoder.Frames.SingleOrDefault(_ => Math.Abs(_.Width - size) < double.Epsilon);
    return frame;
}

private Stream IconToStream(Icon icon)
{
    using (var stream = new MemoryStream())
    {
        icon.Save(stream);
        stream.Position = 0;
        return stream;
    }
}
We pass an icon whose height/width is 32, and the size parameter is 32.
However, the width/height of decoder.Frames[0] is actually 1.0, and I don't know why.
Did I miss something?
The problem is in IconToStream, which creates a MemoryStream, copies the icon into it, returns the reference, and then disposes all resources allocated by the MemoryStream, which effectively makes your stream, and therefore the frame, empty. If you change GetSizedSource to something like the code below, which returns the BitmapFrame before disposing the MemoryStream, it should work:
private BitmapFrame GetSizedSource(Icon icon, int size)
{
    using (var stream = new MemoryStream())
    {
        icon.Save(stream);
        stream.Position = 0;
        return BitmapDecoder.Create(stream, BitmapCreateOptions.DelayCreation, BitmapCacheOption.OnDemand)
            .Frames
            .SingleOrDefault(_ => Math.Abs(_.Width - size) < double.Epsilon);
    }
}
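A hypothetical usage, assuming a 32x32 icon is available (SystemIcons sizes depend on system settings, and "someImageControl" is a placeholder WPF Image control, so treat this purely as an illustration):

// Hypothetical usage: ask for the 32-pixel frame of a system icon.
// SingleOrDefault returns null if the icon contains no frame of that size.
BitmapFrame frame = GetSizedSource(System.Drawing.SystemIcons.Information, 32);
if (frame != null)
{
    someImageControl.Source = frame;
}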