I am working on an app where I need to get the array of pixels from an image and then edit the image using that pixel array.
I am using the following code to get the pixel array from the StorageFile object that refers to the image:
public static async Task<byte[]> GetPixelsArrayFromStorageFileAsync(
    IRandomAccessStreamReference file)
{
    using (IRandomAccessStream stream = await file.OpenReadAsync())
    {
        using (var reader = new DataReader(stream.GetInputStreamAt(0)))
        {
            await reader.LoadAsync((uint)stream.Size);
            var pixelByte = new byte[stream.Size];
            reader.ReadBytes(pixelByte);
            return pixelByte;
        }
    }
}
Now, my questions are:
Why, if I load an image that is 6000 x 4000 pixels, do I get an array of only 8,941,799 bytes, which is actually the size of my image on disk?
How can I access the RGBA channels of the pixels?
Your file contains a compressed, encoded version of the bitmap, so you need to decode it first. I'd suggest loading it into a WriteableBitmap, since you need to display it anyway, and then accessing the PixelBuffer property of the bitmap to get the actual pixels. You could do something like this:
var writeableBitmap = new WriteableBitmap(1, 1);
await writeableBitmap.SetSourceAsync(yourFileStream);
var pixelStream = writeableBitmap.PixelBuffer.AsStream();
var bytes = new byte[pixelStream.Length];
pixelStream.Seek(0, SeekOrigin.Begin);
pixelStream.Read(bytes, 0, bytes.Length);
// Update the bytes here. I think they follow the BGRA pixel format.
pixelStream.Seek(0, SeekOrigin.Begin);
pixelStream.Write(bytes, 0, bytes.Length);
writeableBitmap.Invalidate();
You can check the extension methods here and here to see how to work with the pixels.
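For what it's worth, once you have those bytes, each pixel occupies four consecutive bytes in row-major order. A minimal sketch of reading and tweaking one pixel's channels, assuming the BGRA layout mentioned in the comment above and the writeableBitmap/bytes variables from the snippet:
int pixelWidth = writeableBitmap.PixelWidth;
int x = 10, y = 20;                   // example coordinates, purely illustrative
int index = (y * pixelWidth + x) * 4; // 4 bytes per pixel
byte b = bytes[index];                // blue
byte g = bytes[index + 1];            // green
byte r = bytes[index + 2];            // red
byte a = bytes[index + 3];            // alpha
bytes[index + 2] = 255;               // e.g. push the red channel to maximum before writing the array back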
Related
I'm attempting to create a small OCR application where I can select words. I have already implemented this using the Microsoft example:
protected async override void OnNavigatedTo(NavigationEventArgs e)
{
    // Load image from install folder.
    var file = await Package.Current.InstalledLocation.GetFileAsync("photo.png");

    using (var stream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read))
    {
        // Create image decoder.
        var decoder = await BitmapDecoder.CreateAsync(stream);

        // Load bitmap.
        var bitmap = await decoder.GetSoftwareBitmapAsync();

        // Extract text from image.
        OcrResult result = await ocrEngine.RecognizeAsync(bitmap);

        // Display recognized text.
        OcrText.Text = result.Text;
    }
}
However, I want the loaded image to fit the screen. I do this by simply creating a BitmapImage, setting its source to the stream, and assigning that bitmap as the source of an Image element. The problem with this is that the coordinates of the OCR words are not aligned with the displayed image, as I presume the bitmap created from the decoder uses the original image size. How can I rectify this, so that the size of the image the OCR engine uses matches the Image element?
My code to load the image and get an OCR result:
// Create a stream from the picked file
IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read);
// Create a bitmap from the stream
BitmapImage bitmap = new BitmapImage();
bitmap.SetSource(stream);
this.imgDisplay.Source = bitmap;
// Create the decoder from the stream
var decoder = await BitmapDecoder.CreateAsync(stream);
// Get the SoftwareBitmap representation of the file
SoftwareBitmap sBitmap = await decoder.GetSoftwareBitmapAsync();
mOcrResult = await mOcrEngine.RecognizeAsync(sBitmap);
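Not from the original post, but one way to reconcile the two sizes is to keep the OCR running on the full-resolution SoftwareBitmap and scale each word's BoundingRect into the Image element's coordinate space afterwards. A rough sketch, assuming imgDisplay has already been laid out and is stretched to fill its box (or shares the photo's aspect ratio):
// Map OCR word rectangles from decoder pixel space into the displayed Image's space.
double scaleX = imgDisplay.ActualWidth / decoder.PixelWidth;
double scaleY = imgDisplay.ActualHeight / decoder.PixelHeight;
foreach (OcrWord word in mOcrResult.Words)
{
    Rect r = word.BoundingRect;
    Rect scaled = new Rect(r.X * scaleX, r.Y * scaleY, r.Width * scaleX, r.Height * scaleY);
    // Use 'scaled' when positioning selection overlays on top of imgDisplay.
}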
I am developing an image processing app in UWP on Windows 10. I am opening an image using a file picker, as shown below.
FileOpenPicker openPicker = new FileOpenPicker();
openPicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
openPicker.ViewMode = PickerViewMode.Thumbnail;
openPicker.FileTypeFilter.Clear();
openPicker.FileTypeFilter.Add(".bmp");
openPicker.FileTypeFilter.Add(".png");
openPicker.FileTypeFilter.Add(".jpeg");
openPicker.FileTypeFilter.Add(".jpg");

StorageFile file = await openPicker.PickSingleFileAsync();
if (file != null)
{
    IRandomAccessStream fileStream = await file.OpenAsync(FileAccessMode.Read);
    BitmapImage bitmapImage = new BitmapImage();
    bitmapImage.SetSource(fileStream);
    myImage.Source = bitmapImage;
    // code to retrieve bytes of bitmap image
}
Inside the above if statement, I am retrieving the bytes from this image as shown below.
// Fetching pixel data
using (IRandomAccessStream fileStreams = await file.OpenAsync(Windows.Storage.FileAccessMode.Read))
{
    BitmapDecoder decoder = await BitmapDecoder.CreateAsync(fileStreams);
    BitmapTransform transform = new BitmapTransform()
    {
        ScaledWidth = Convert.ToUInt32(bitmapImage.PixelWidth),
        ScaledHeight = Convert.ToUInt32(bitmapImage.PixelHeight)
    };

    PixelDataProvider pixelData = await decoder.GetPixelDataAsync(
        BitmapPixelFormat.Rgba8,
        BitmapAlphaMode.Straight,
        transform,
        ExifOrientationMode.IgnoreExifOrientation, // This sample ignores Exif orientation
        ColorManagementMode.DoNotColorManage
    );

    // byte[], a global variable
    sourcePixels = pixelData.DetachPixelData();
    // uint, a global variable
    width = decoder.PixelWidth;
    // uint, a global variable
    height = decoder.PixelHeight;
}
Now I need to manipulate this byte array to generate different effects. For testing purposes, though, I am converting the byte array straight back into a bitmap and setting it as the main image source (in a button click event), but it is not working correctly:
WriteableBitmap scaledImage = new WriteableBitmap((int)width, (int)height);
using (Stream stream = scaledImage.PixelBuffer.AsStream())
{
    await stream.WriteAsync(sourcePixels, 0, sourcePixels.Length);
    myImage.Source = scaledImage;
}
When the image was opened, it looked like this.
After converting it to a byte array and then setting the byte array back as the image source, the image colors change, although I haven't changed any values in the byte array.
Where is the problem? Is the conversion to the byte array wrong, or the conversion of the byte array back to a bitmap?
Solution:
I have found the cause of this issue: I was passing BitmapPixelFormat.Rgba8 to GetPixelDataAsync when fetching the pixel data, whereas I should have used BitmapPixelFormat.Bgra8.
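In other words, the pixel fetch should request the byte layout that WriteableBitmap's buffer uses. A corrected sketch of the call from the question, reusing the same decoder and transform variables:
PixelDataProvider pixelData = await decoder.GetPixelDataAsync(
    BitmapPixelFormat.Bgra8,    // matches WriteableBitmap's BGRA byte order
    BitmapAlphaMode.Straight,   // some samples use Premultiplied here, which is what WriteableBitmap's buffer normally stores
    transform,
    ExifOrientationMode.IgnoreExifOrientation,
    ColorManagementMode.DoNotColorManage
);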
I am having trouble rendering PNGs that use Palette as "Color Type". Here is some simple code to reproduce the issue:
private async System.Threading.Tasks.Task Fetch()
{
    HttpClient httpClient = new HttpClient();
    Uri uri = new Uri("http://static.splashnology.com/articles/How-to-Optimize-PNG-and-JPEG-without-Quality-Loss/PNG-Palette.png");
    HttpResponseMessage response = await httpClient.GetAsync(uri);

    if (response.StatusCode == HttpStatusCode.Ok)
    {
        try
        {
            var content = await response.Content.ReadAsBufferAsync();
            WriteableBitmap image = await BitmapFactory.New(1, 1).FromStream(content.AsStream());

            Rect destination = new Rect(0, 0, image.PixelWidth, image.PixelHeight);
            Rect source = new Rect(0, 0, image.PixelWidth, image.PixelHeight);
            WriteableBitmap canvas = new WriteableBitmap(image.PixelWidth, image.PixelHeight);
            canvas.Blit(destination, image, source);

            RadarImage.Source = canvas;
        }
        catch (Exception e)
        {
            System.Diagnostics.Debug.WriteLine(e.Message);
            System.Diagnostics.Debug.WriteLine(e.StackTrace);
        }
    }
}
If I run that code on Windows Phone 8.1, the image appears with the wrong colors. If I do the same test using a PNG that uses RGB as the "Color Type", then everything works fine.
I have looked at the CodePlex forum and haven't seen any posts related to this. I have reported it as an issue, although it might be related to the way I'm rendering it. Is there any mistake in the way I'm using WriteableBitmap that could cause the wrong rendering?
UPDATE
According to this discussion
https://writeablebitmapex.codeplex.com/discussions/274445
the issue is related to an unexpected order of the bytes. These comments are from one and a half years ago, so I think there should be a proper fix somewhere...
The image that is rendered incorrectly is in the code above.
This one, using the same code, is rendered correctly.
http://www.queness.com/resources/images/png/apple_ex.png
The only difference between these two images is the "Color Type" property. The one that fails is set to "Palette" and the one rendered correctly is set to "RGB Alpha".
Thanks!
Carlos.
The problem appears to be in the FromStream extension, which seems to translate the paletted PNG to RGBA. As you note, WriteableBitmap wants BGRA. I suspect FromStream passes non-palette PNGs' pixels through unswizzled. This lets the apple start and finish as BGRA, while the monkey ends up RGBA and draws with red and blue reversed.
You can bypass this problem by skipping FromStream and using a BitmapDecoder instead, so you can specify the format you want it to decode into:
// Read the retrieved image's bytes and write them into an IRandomAccessStream
IBuffer buffer = await response.Content.ReadAsBufferAsync();
var randomAccessStream = new InMemoryRandomAccessStream();
await randomAccessStream.WriteAsync(buffer);

// Decode the downloaded image as Bgra8 with premultiplied alpha.
// GetPixelDataAsync lets us pass in the desired format and it'll do the magic to translate as it decodes.
BitmapDecoder decoder = await BitmapDecoder.CreateAsync(randomAccessStream);
var pixelData = await decoder.GetPixelDataAsync(
    BitmapPixelFormat.Bgra8,
    BitmapAlphaMode.Premultiplied,
    new BitmapTransform(),
    ExifOrientationMode.IgnoreExifOrientation,
    ColorManagementMode.DoNotColorManage);

// Get the decoded bytes
byte[] imageData = pixelData.DetachPixelData();

// And stick them in a WriteableBitmap
WriteableBitmap image = new WriteableBitmap((int)decoder.PixelWidth, (int)decoder.PixelHeight);
Stream pixelStream = image.PixelBuffer.AsStream();
pixelStream.Seek(0, SeekOrigin.Begin);
pixelStream.Write(imageData, 0, imageData.Length);

// And stick it in an Image so we can see it.
RadarImage.Source = image;
I crop a circular area from an image to use as an avatar. I need to get the pixel byte[] of the image and upload it to the server in base64 format. Unfortunately, the SaveJpeg() method does not support transparency for the pixels outside the selected circle. I tried the ImageTools library, but no platform other than Windows Phone is able to create a PNG image from the resulting byte[]. Is there a way to do this?
There is no platform API to do this. The ToolStack PNG library presents a lightweight solution.
http://toolstack.com/libraries/pngwriter
This code worked for me. Before you try it, make sure that your WriteableBitmap has a transparent background (you can check by assigning it to an Image control's source). If not, make the background transparent in the control it came from.
var localFolder = Windows.Storage.ApplicationData.Current.LocalFolder;
var file = await localFolder.CreateFileAsync("temp.png", CreationCollisionOption.ReplaceExisting);

using (var ras = await file.OpenAsync(FileAccessMode.ReadWrite, StorageOpenOptions.None))
{
    WriteableBitmap bitmap = imageSource;
    var stream = bitmap.PixelBuffer.AsStream();
    byte[] buffer = new byte[stream.Length];
    await stream.ReadAsync(buffer, 0, buffer.Length);

    BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, ras);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Straight, (uint)bitmap.PixelWidth, (uint)bitmap.PixelHeight, 96.0, 96.0, buffer);
    await encoder.FlushAsync();
}
Take a look at it!
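If you then need the base64 string mentioned in the question, one possible follow-up (just a sketch, reusing the temp.png file written above) is to read the encoded PNG back and convert it:
// Read the freshly encoded PNG and turn it into a base64 string for upload.
IBuffer pngBuffer = await FileIO.ReadBufferAsync(file);
string base64Png = Windows.Security.Cryptography.CryptographicBuffer.EncodeToBase64String(pngBuffer);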
I'm running into an issue when I try to copy the pixel buffer of one WriteableBitmap over to another WriteableBitmap, essentially creating a copy of the WriteableBitmap object. However, when I try to do this, the second WriteableBitmap's stream length is too short to hold all the values of the first WriteableBitmap.
I posted my code below. Keep in mind that I'm capturing the original data from a webcam. However, when I compare the "ps" object's stream size to wb1 and wb2, ps's size is much smaller than both of them. What I'm confused about is why wb2's stream size is smaller than wb1's. Thanks for any help.
private MemoryStream originalStream = new MemoryStream();
WriteableBitmap wb1 = new WriteableBitmap((int)photoBox.Width, (int)photoBox.Height);
WriteableBitmap wb2 = new WriteableBitmap((int)photoBox.Width, (int)photoBox.Height);
ImageEncodingProperties imageProperties = ImageEncodingProperties.CreateJpeg();
var ps = new InMemoryRandomAccessStream();
await mc.CapturePhotoToStreamAsync(imageProperties, ps);
await ps.FlushAsync();
ps.Seek(0);
wb1.SetSource(ps);
(wb1.PixelBuffer.AsStream()).CopyTo(originalStream); // this works
originalStream.Position = 0;
originalStream.CopyTo(wb2.PixelBuffer.AsStream()); // this line gives me the error: "Unable to expand length of this stream beyond its capacity"
Image img = new Image();
img.Source = wb2; // my hope is to treat this as it's own entity and modify this image independently of wb1 or originalStream
photoBox.Source = wb1;
Note that when you do new WriteableBitmap(w, h) and then call SetSource() to an image of a different resolution - the bitmap's size will change (it won't be the w x h passed in the constructor). It's likely that your photoBox.Width/Height are different than what your CapturePhotoToStreamAsync() call returns (I am assuming the image is captured at the default or preconfigured camera settings, while photoBox is just a control on screen).
How about just doing something like this:
ps.Seek(0);
wb1.SetSource(ps);
ps.Seek(0);
wb2.SetSource(ps);
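If you do need an independent pixel-level copy rather than decoding the JPEG into both bitmaps, a minimal sketch (assuming a using directive for System.Runtime.InteropServices.WindowsRuntime so the buffer extension methods are available) is to size the second bitmap from the first after SetSource has run, so both buffers have the same length, and then copy the buffer directly:
ps.Seek(0);
wb1.SetSource(ps);

// wb1 now has the captured photo's real dimensions; create (or recreate) wb2 to match.
WriteableBitmap wb2 = new WriteableBitmap(wb1.PixelWidth, wb1.PixelHeight);
wb1.PixelBuffer.CopyTo(wb2.PixelBuffer);   // WindowsRuntimeBufferExtensions.CopyTo
wb2.Invalidate();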
I think you should create a writer from the PixelBuffer and use it to copy the stream.
The AsStream method should be used to read the buffer, not to write into it.
Have a look at
http://social.msdn.microsoft.com/Forums/en-NZ/winappswithcsharp/thread/2b499ac5-8bc8-4259-a144-842bd756bfe2
for a piece of code