I built a UWP XAML control that acts as a barcode/QR code scanner using the ZXing.Net library (http://zxingnet.codeplex.com/). The control works fine: it previews the camera on the device, then captures a frame and lets ZXing process it. All a user has to do is place it on a page and tell it what type of barcode to scan.
I am just facing one problem: How can I limit the scan area to the center of the captured frame? Sometimes there are multiple barcodes in the image and the library returns a result from one of the barcodes but I am interested in the barcode that is in the middle of the frame.
Is this possible with zxing.net? If so, how can I limit the scan area?
I don't know what code you are using, but I can give a hint based on my UWP barcode scanner.
Inside the CapturePhotoFromCameraAsync() task you can find the code that takes a "screenshot" frame from the camera:
VideoFrame videoFrame = new VideoFrame(BitmapPixelFormat.Bgra8, (int)_width, (int)_height);
await mediaCapture.GetPreviewFrameAsync(videoFrame);
There you can get the SoftwareBitmap and even convert it to a WriteableBitmap:
SoftwareBitmap sb = videoFrame.SoftwareBitmap;
WriteableBitmap bitmap = new WriteableBitmap(sb.PixelWidth, sb.PixelHeight);
sb.CopyToBuffer(bitmap.PixelBuffer); // copy the frame's pixels into the WriteableBitmap
But now there is another question: how to crop the WriteableBitmap (you can find solutions on SO or MSDN; it's not short) and how to convert it back to a SoftwareBitmap.
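One way to sidestep the conversion back is to crop the raw BGRA pixel buffer yourself and hand the cropped bytes straight to ZXing.Net's BarcodeReader, which can decode raw pixels directly. A minimal sketch, assuming the videoFrame, sb, and bitmap variables from the snippets above; cropping to the center half of the frame is just an illustrative choice:

using System;
using System.Runtime.InteropServices.WindowsRuntime; // for PixelBuffer.ToArray()
using ZXing;

byte[] pixels = bitmap.PixelBuffer.ToArray();
int cropW = sb.PixelWidth / 2, cropH = sb.PixelHeight / 2;
int left = (sb.PixelWidth - cropW) / 2, top = (sb.PixelHeight - cropH) / 2;

// Copy the center rows into a smaller buffer (4 bytes per BGRA pixel).
byte[] cropped = new byte[cropW * cropH * 4];
for (int row = 0; row < cropH; row++)
{
    Buffer.BlockCopy(pixels, ((top + row) * sb.PixelWidth + left) * 4,
                     cropped, row * cropW * 4, cropW * 4);
}

// Decode only the cropped center region.
var reader = new BarcodeReader();
Result result = reader.Decode(cropped, cropW, cropH, RGBLuminanceSource.BitmapFormat.BGRA32);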
I'm very new to Android development and decided to approach it using Xamarin. I grabbed the Xamarin (C#) Camera2Api sample from Xamarin's website and got it to the point where I can load the camera preview, take a picture, and persist that picture to disk. However, when I look at the picture, it's always 640x480, a fairly square 4:3 image.
I notice in the Camera2Api project that this is set by grabbing the ScalerStreamConfigurationMap from the camera characteristics, and determining the largest supported size. Here's the snippet:
var map = (StreamConfigurationMap)characteristics.Get(CameraCharacteristics.ScalerStreamConfigurationMap);
if (map == null)
{
    continue;
}

// For still image captures, we use the largest available size.
Size largest = (Size)Collections.Max(
    Arrays.AsList(map.GetOutputSizes((int)ImageFormatType.Jpeg)),
    new CompareSizeByArea());
For whatever reason, map.GetOutputSizes((int)ImageFormatType.Jpeg) always returns a maximum of 640x480.
Can anyone shed some light as to why my emulator only supports such a small image, despite the actual camera preview taking up the full width and height of the screen?
When using the Camera APIs it is important to remember that the capture image size is independent of the preview size. So even though your preview may be the full width/height of the screen, this has no bearing on your capture image size.
The reason 640x480 is being returned as your image size is because that is the largest image size the "device" you are emulating supports, or it's a limitation of the emulator itself. If you were to run this test on a real device you would most likely see a different image size than what you are seeing on the emulator. Alternatively, you could print a list of supported capture sizes with map.GetOutputSizes((int)ImageFormatType.Jpeg)
and select a size from that list if a 4:3 aspect ratio at 640x480 is not what you want.
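A minimal sketch of that enumeration, assuming the characteristics object from the question; the 16:9 target ratio is purely illustrative:

using System;
using System.Linq;
using Android.Graphics;
using Android.Hardware.Camera2;
using Android.Hardware.Camera2.Params;
using Android.Util;

var map = (StreamConfigurationMap)characteristics.Get(CameraCharacteristics.ScalerStreamConfigurationMap);
Size[] jpegSizes = map.GetOutputSizes((int)ImageFormatType.Jpeg);

// Log every size the device (or emulator) reports for JPEG captures.
foreach (var size in jpegSizes)
    Log.Debug("CameraSizes", $"{size.Width}x{size.Height}");

// Pick the largest size near the target ratio, falling back to the first entry.
Size best = jpegSizes
    .Where(s => Math.Abs((double)s.Width / s.Height - 16.0 / 9.0) < 0.05)
    .OrderByDescending(s => (long)s.Width * s.Height)
    .FirstOrDefault() ?? jpegSizes[0];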
So I'm creating a UWP app which includes an InkCanvas; as such, I want to store its contents in a variable so that I can show a preview of the canvas in the menu.
My question is, what is the best bitmap to use, as there appear to be a number of options.
The SoftwareBitmap is included in Windows.Graphics.Imaging, whereas the BitmapImage is part of Windows.UI.Xaml.Media.Imaging, both of which are available in a UWP app.
Can the UWP Image class for displaying images use either of these formats?
Which is most appropriate in my case?
So, having done much experimenting, it seems that for use with the Image control from Windows.UI.Xaml.Controls, a SoftwareBitmap works fairly well.
A SoftwareBitmapSource can be assigned to the Image.Source, provided the SoftwareBitmap has BitmapPixelFormat.Bgra8 and BitmapAlphaMode.Premultiplied (or BitmapAlphaMode.None). From the Remarks section on the SoftwareBitmapSource page:
A SoftwareBitmap displayed in a XAML app must be in BGRA pixel format with pre-multiplied alpha values
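A minimal sketch of that wiring, assuming an already-obtained SoftwareBitmap in softwareBitmap and an Image control named previewImage (both names are illustrative):

using Windows.Graphics.Imaging;
using Windows.UI.Xaml.Media.Imaging;

// Convert to the one pixel format/alpha mode SoftwareBitmapSource accepts, if needed.
SoftwareBitmap displayable = SoftwareBitmap.Convert(
    softwareBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);

var source = new SoftwareBitmapSource();
await source.SetBitmapAsync(displayable);
previewImage.Source = source;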
All works nicely; now to work on scaling the bitmap down, as the difference in size between the InkCanvas and the Image makes the thumbnail/preview look poor.
I want the colored portion of the image to be placed at the center of the bitmap image. I was not able to find AForge filters which can achieve this. Can you please advise how to achieve this (there will be only one color loop each time, as in the attached image)? I have used AForge throughout the project, but if EmguCV (OpenCV) can achieve this, I am open to using it.
I was able to achieve this using the AForge library; link, code, and result picture are below.
AForge documentation
// create filter
ExtractBiggestBlob filter = new ExtractBiggestBlob();
// apply the filter
Bitmap biggestBlobsImage = filter.Apply(image);
Because I have only one blob, I used the filter which outputs the biggest blob. Now the colored portion occupies the whole output image, so its center becomes the center of the image.
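If you instead want the blob pasted back at the center of a canvas the size of the original, a minimal sketch (assuming a System.Drawing source bitmap; the white background is an arbitrary choice):

using System.Drawing;
using AForge.Imaging.Filters;

var filter = new ExtractBiggestBlob();
Bitmap blob = filter.Apply(source);

// Draw the extracted blob centered on a canvas matching the original size.
var centered = new Bitmap(source.Width, source.Height);
using (Graphics g = Graphics.FromImage(centered))
{
    g.Clear(Color.White);
    g.DrawImage(blob,
        (centered.Width - blob.Width) / 2,
        (centered.Height - blob.Height) / 2);
}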
I am developing a windows service which processes video input and sends results of interest to a separate platform. I do not need to display the frames in this context. I am having a problem getting the correct camera input.
I want to save a Bitmap retrieved from an EMGU capture object. To make sure that the capture is actually reading the video stream, I save the bitmap to a file, as follows:
Mat frame = mCapture.QueryFrame();
Template = frame.Bitmap;
Template.Save("frozen.jpg");
The capture is initialized as follows:
CvInvoke.UseOpenCL = false;
int index = int.Parse(ConfigurationManager.AppSettings["Index"]);
mCapture = new Capture(index);
One test platform is a Lenovo laptop running 64-bit Windows 10, with a built-in camera and a second camera attached through a USB port. The input of interest is the second camera. However, whatever index I use to open the Capture object, the input comes from the built-in camera.
The other platform is a Meego Pad, running 32-bit Windows 10, with the same camera attached. In this case, I simply get blank frames as video input. For both platforms, running the camera application shows the video input as expected. What is the problem with my initialization of the Capture object?
Further Investigation Shows...
First, I was using the wrong index to create the capture, so that created some of the confusion. But more confusion follows.
When I call the QueryFrame() method in an event delegate, as shown in this simple example, I successfully retrieve the frame from the camera. The example code looks like this:
Application.Idle += new EventHandler(delegate(object sender, EventArgs e)
{
    // run this until the application is closed (close button click on image viewer)
    viewer.Image = capture.QueryFrame(); // draw the image obtained from the camera
});
viewer.ShowDialog(); // show the image viewer
When I call the same method in a different thread (in response to a communication event) I get an empty image. On the gripping hand, when I call the method in a timer callback, I get the correct image.
I won't call this closed, because I would still like to know why QueryFrame() acts correctly in some threads, but not in others. However, I can work with this, so the question is now mostly academic.
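For reference, a minimal sketch of the timer-callback pattern that worked, using the mCapture field from above (the one-second interval and file name are arbitrary):

using System;
using System.Threading;
using Emgu.CV;

// Keep a reference to the timer so it isn't garbage-collected.
var timer = new Timer(_ =>
{
    using (Mat frame = mCapture.QueryFrame())
    {
        // Guard against empty frames before touching the bitmap.
        if (frame != null && !frame.IsEmpty)
            frame.Bitmap.Save("frozen.jpg");
    }
}, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));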
I am making a Windows Phone app using the Nokia Imaging SDK, and the example for the app is this real time blend demo.
I am trying to capture an image with another image overlaid on top of it, as in the live camera stream in the above example. Below is the code I am using to try to capture the image with the effect:
CameraCaptureSequence cameraCaptureSequence = App.Camera.CreateCaptureSequence(1);
MemoryStream stream = new MemoryStream();
cameraCaptureSequence.Frames[0].CaptureStream = stream.AsOutputStream();
await App.Camera.PrepareCaptureSequenceAsync(cameraCaptureSequence);
await cameraCaptureSequence.StartCaptureAsync();
stream.Seek(0, SeekOrigin.Begin);
MediaLibrary library = new MediaLibrary();
library.SavePictureToCameraRoll("picture1.jpg", stream);
But the above code only saves the image without the effect, so how do I capture images with the live blended effects from the camera?
Basically what you have to do is attach the same effects/filters that you had in the preview to a new image source that takes the captured photo stream instead, and probably use a different renderer too.
Either that, or set up a duplicate set of filters for the capture. There are reasons to do so: you could, e.g., configure lower-quality effects in the preview to help performance.
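A minimal sketch of the first approach, assuming myFilters holds the same IFilter list the preview used and stream is the captured photo stream from the question:

using System.IO;
using System.Runtime.InteropServices.WindowsRuntime; // for IBuffer.AsStream()
using Microsoft.Xna.Framework.Media;
using Nokia.Graphics.Imaging;
using Windows.Storage.Streams;

stream.Seek(0, SeekOrigin.Begin);
using (var source = new StreamImageSource(stream))
using (var effect = new FilterEffect(source) { Filters = myFilters })
using (var renderer = new JpegRenderer(effect))
{
    // Render the filtered photo to a JPEG buffer and save that instead.
    IBuffer jpegBuffer = await renderer.RenderAsync();
    new MediaLibrary().SavePictureToCameraRoll("picture1.jpg", jpegBuffer.AsStream());
}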