A friend and I spent the better part of last night nearly tearing our hair out trying to work with some images in a metro app. We got images into the app with the share charm, and then I wanted to do some other work with them, cropping the images and saving them back into the appdata folder. This proved extremely frustrating.
My question, at the end of all this, is going to be "What's the proper way of doing this, without feeling like I'm hammering together a bunch of mismatched jigsaw puzzle pieces?"
When sharing multiple images with the app, they come in as a list of Windows.Storage.StorageFiles. Here's some code used to handle that.
var storageItems = await _shareOperation.Data.GetStorageItemsAsync();
foreach (StorageFile item in storageItems)
{
    var stream = await item.OpenReadAsync();
    var properties = await item.Properties.GetImagePropertiesAsync();
    var image = new WriteableBitmap((Int32)properties.Width, (Int32)properties.Height);
    image.SetSource(stream);
    images.Add(image);
}
Some searching online has indicated that currently, a Windows.UI.Xaml.Media.Imaging.WriteableBitmap is the only thing capable of letting you access the pixel data in the image. This question includes a helpful answer full of extension methods for saving images to a file, so we used those.
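The gist of those helpers is roughly this (paraphrased from memory rather than the exact extension methods; the file name is just an example): pull the pixels out of the WriteableBitmap's PixelBuffer and hand them to a BitmapEncoder.
// Rough paraphrase of the save helper, not the exact extension-method code.
// 'image' is one of the WriteableBitmaps from the loop above.
var outFile = await ApplicationData.Current.LocalFolder.CreateFileAsync(
    "cropped.png", CreationCollisionOption.ReplaceExisting);
using (var outStream = await outFile.OpenAsync(FileAccessMode.ReadWrite))
{
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, outStream);
    byte[] pixels = image.PixelBuffer.ToArray(); // needs System.Runtime.InteropServices.WindowsRuntime
    encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore,
        (uint)image.PixelWidth, (uint)image.PixelHeight, 96, 96, pixels);
    await encoder.FlushAsync();
}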
Our problems got worse when I tried opening the files again later. I did something similar to before:
var files = await ApplicationData.Current.LocalFolder.GetFilesAsync();
foreach (var file in files)
{
var fileStream = await file.OpenReadAsync();
var properties = await file.Properties.GetImagePropertiesAsync();
var bitmap = new WriteableBitmap((Int32)properties.Width, (Int32)properties.Height);
bitmap.SetSource(fileStream);
System.IO.Stream stream = bitmap.PixelBuffer.AsStream();
Here comes a problem. How long is this stream, if I want the bytes out of it?
// CRASH! Length isn't supported on an IRandomAccessStream.
var pixels = new byte[fileStream.Length];
Ok try again.
var pixels = new byte[stream.Length];
This works, except... if the image is compressed, the stream is shorter than you would expect, so you will eventually get an out of bounds exception. For now pretend it's an uncompressed bitmap.
await stream.ReadAsync(pixels, 0, pixels.Length);
Well, guess what. Even though I said bitmap.SetSource(fileStream); in order to read in the data, my byte array is still full of zeroes. I have no idea why. If I pass this same bitmap into my UI through the sample data group, the image shows up just fine. So it clearly has the pixel data in that bitmap somewhere, but I can't read it out of bitmap.PixelBuffer? Why not?
Finally, here's what ended up actually working.
var decoder = await BitmapDecoder.CreateAsync(BitmapDecoder.PngDecoderId, fileStream);
var data = await decoder.GetPixelDataAsync();
var bytes = data.DetachPixelData();
/* process my data, finally */
} // end of that foreach I started a while ago
So now I have my image data, but I still have a big problem. In order to do anything with it, I have to make assumptions about its format. I have no idea whether it's RGBA, RGB, ABGR, BGRA, or whatever else it might be. If I guess wrong, my processing just fails. I've had dozens of test runs spit out zero-byte and corrupted images, upside-down images (???), wrong colors, etc. I would have expected to find some of this info in the properties that I got from calling await file.Properties.GetImagePropertiesAsync();, but no luck. That only contains the image width and height, plus some other useless things. Minimal documentation here.
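For what it's worth, GetPixelDataAsync does seem to have an overload where you tell the decoder which format you want, instead of guessing what it hands back (and there's a decoder.BitmapPixelFormat property to see what's native). Untested beyond a skim of the docs, something like:
var decoder = await BitmapDecoder.CreateAsync(fileStream);
var data = await decoder.GetPixelDataAsync(
    BitmapPixelFormat.Bgra8,            // ask for a known layout up front
    BitmapAlphaMode.Straight,
    new BitmapTransform(),
    ExifOrientationMode.RespectExifOrientation,
    ColorManagementMode.ColorManageToSRgb);
var bytes = data.DetachPixelData();     // now definitely BGRA, 4 bytes per pixel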
So, why is this process so painful? Is this just reflecting the immaturity of the libraries right now, and can I expect it to get better? Or is there already some standard way of doing this? I wish it were as easy as in System.Drawing. That gave you all the data you ever needed, and happily loaded any image type correctly, without making you deal with streams yourself.
From what I have seen, when you are planning on loading the WriteableBitmap from a stream, you don't need to check the image dimensions first: just do new WriteableBitmap(1, 1), then call SetSource().
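In other words, something like:
var bitmap = new WriteableBitmap(1, 1);   // placeholder size
bitmap.SetSource(fileStream);             // SetSource picks up the real dimensions from the stream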
Not sure why you were thinking var pixels = new byte[fileStream.Length]; would work, since the fileStream has the compressed image bytes and not a pixel array.
You might need to seek to the beginning of the stream to get the pixels array:
var pixelStream = bitmap.PixelBuffer.AsStream();
var bytes = new byte[pixelStream.Length];
pixelStream.Seek(0, SeekOrigin.Begin);
pixelStream.Read(bytes, 0, bytes.Length);
I had started working on a WinRT port of WriteableBitmapEx - maybe it could help you: http://bit.ly/WriteableBitmapExWinRT. I have not tested it well and it is based on an older version of WBX, but it is fairly complete in terms of feature support. It might be a tad slower than what is possible, too.
I have a video file that's already loaded in memory (an IFormFile that's converted to a byte[]).
I've been trying to figure out how to take that video and get a thumbnail image from it, without having to read/write to disk.
I found use cases for MediaToolkit and FFmpeg here, and Movie Thumbnailer here, but from what I can find, those require the video already be saved to disk and have write access to output the thumbnail to a file.
Is there any way I can take either an IFormFile or byte[] and do something similar to what MediaToolkit is doing while being able to keep the result in memory?
I know a lot of folks are saying byte[] isn't the way to go. In that case, I'm more than happy to convert from IFormFile to a Stream, but I still need a way to do that and keep it in memory.
Please try this with Windows.Media.Editing; it should work, though it's not fully tested.
int frameHeight = 480; // set the thumbnail height you want (example value)
int frameWidth = 640;  // ...and the width (example value)
TimeSpan getFrameInTime = new TimeSpan(0, 0, 1);
//Using Windows.Media.Editing get your ImageStream
var yourClip = await MediaClip.CreateFromFileAsync(someFile);
var composition = new MediaComposition();
// add the section of the video as needed
composition.Clips.Add(yourClip);
// Answer - now that you have your image stream, get the thumbnail
var yourImageStream = await composition.GetThumbnailAsync(
getFrameInTime,
Convert.ToInt32(frameWidth),
Convert.ToInt32(frameHeight),
VideoFramePrecision.NearestFrame);
//now create your thumbnail bitmap
var yourThumbnailBitmap = new WriteableBitmap(
Convert.ToInt32(frameWidth),
Convert.ToInt32(frameHeight)
);
yourThumbnailBitmap.SetSource(yourImageStream);
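And if you need to end up with a byte[] without ever touching disk (which was the original ask), you should be able to pull the encoded thumbnail bytes straight out of that stream with a DataReader. This part is an untested sketch:
// Untested sketch: read the thumbnail stream into a byte[] entirely in memory.
var size = (uint)yourImageStream.Size;
var reader = new Windows.Storage.Streams.DataReader(yourImageStream.GetInputStreamAt(0));
await reader.LoadAsync(size);
byte[] thumbnailBytes = new byte[size];
reader.ReadBytes(thumbnailBytes);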
Good morning StackOverflow,
I come to you today with a scenario that is driving me slowly insane. I'm hopeful you can aid me with this, as I'm certain this must be possible, but I can't solve it myself.
My issue is that I'm working on two different applications at present. The first is a UWP application to deliver packages for an internal mail system. The idea here is that upon receipt of the package, the person will sign in the application using an InkCanvas signature. This should then be saved to the database as a byte array, and then reloaded in either a WinForms or WebForms app (I'm currently doing the WinForms one first) as a regular old image file. However, I'm absolutely stuck on converting between the WriteableBitmap that I get from UWP and the regular Bitmap I need to load in WinForms. Any ideas?
Here’s what I’m doing presently:
Saving the UWP image:
private byte[] SaveImage()
{
    var canvasStrokes = SignatureCanvas.InkPresenter.StrokeContainer.GetStrokes();
    if (canvasStrokes.Count > 0)
    {
        var width = (int)SignatureCanvas.ActualWidth;
        var height = (int)SignatureCanvas.ActualHeight;
        var device = CanvasDevice.GetSharedDevice();
        var renderTarget = new CanvasRenderTarget(device, width, height, 96);
        using (var drawingSession = renderTarget.CreateDrawingSession())
        {
            drawingSession.Clear(Colors.White);
            drawingSession.DrawInk(canvasStrokes);
        }
        return renderTarget.GetPixelBytes();
    }
    return null;
}
Then I save the bytes to the database, and pull them from the database in the WinForms app... so am I making some boneheaded mistake here? Am I reading the signature in the wrong format? Or do I need to do something more to convert the formats from one to the other?
I’m stumped, after trying many different results from StackOverflow pages I don’t know what I’m doing wrong.
Any help would be amazing! And sorry if I’ve done something dumb.
You're actually saving raw bitmap data to your database.
I don't remember exactly how the WinForms import works, but I doubt it can consume raw bitmap data.
You should encode your raw data to a PNG or JPEG image first and save the result. You will end up with a regular old image file that should be readable from WinForms.
using (IRandomAccessStream stream = /* the stream where you want to save the data */)
{
    byte[] bytes = renderTarget.GetPixelBytes();
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8,
        BitmapAlphaMode.Ignore,
        renderTarget.SizeInPixels.Width, renderTarget.SizeInPixels.Height,
        96, 96, bytes);
    await encoder.FlushAsync();
}
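On the WinForms side, what comes back from the database is then just a regular encoded image, so something along these lines should be enough to turn it back into a Bitmap (imageBytes and pictureBox1 here are placeholders for whatever you actually have):
// 'imageBytes' is the PNG/JPEG byte[] read back from the database (placeholder name).
using (var ms = new MemoryStream(imageBytes))
using (var img = System.Drawing.Image.FromStream(ms))
{
    var bitmap = new System.Drawing.Bitmap(img); // copy so the stream can be disposed safely
    pictureBox1.Image = bitmap;                  // placeholder control - use it however you need
}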
Edit: SOLVED! Please see my answer down below for details.
I was unable to find an answer to the original question but I found an alternate solution
This question may be asked somewhere else but I have been searching for days and can't find anything that helps.
Question: I need to convert "Stream" to "Image<Bgr, Byte>" in one go. Is there a way/command to convert directly from System.Drawing.Image.FromStream to Emgu.CV.Image<Bgr, Byte> without converting from stream to image to bitmap to Image<Bgr, Byte>?
Information: I'm coding in C# in Visual Studio 2010 as part of my dissertation project.
I am taking an image stream from an IP camera on a network and applying many algorithms to detect faces, extract facial features and recognise an individual's face. On my laptop's local camera I can achieve about 25 FPS (give or take), including algorithms, because I don't have to convert the image. For an IP camera stream I need to convert it many times to achieve the desired format, and the result is around 5-8 FPS.
(I know my current method is extremely inefficient, which is why I'm here; I'm actually converting an image 5 times in total (even grayscaling too), and only using about half of my processor (i7, 8 GB RAM).) It does have to be Image<Bgr, Byte>, as that is the only format the algorithms will function with.
The code I'm using to get the image:
//headers
using System.IO;
using System.Threading;
using System.Net;
//request a connection
req = (HttpWebRequest)HttpWebRequest.Create(cameraUrl);
//gives chance for timeout for errors to occur or loss of connection
req.AllowWriteStreamBuffering = true;
req.Timeout = 20000;
//retrieve response (if successful)
res = req.GetResponse();
//image returned
stream = res.GetResponseStream();
I have a lot of stuff in the background managing connections, data, security etc., which I have shortened to the above code.
My current code to convert the image to the desired output:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for other uses
gray = currentFrame.Convert<Gray, Byte>();
I understand there is a method to save an image locally temporarily but I would need to avoid that for security purposes. I'm looking more for a direct conversion to help save processing power.
Am I overlooking something? All help is appreciated.
Thanks for reading. (I will update this if anyone requests any more details)
-Dave
You've got a couple of potential bottlenecks, not the least of which is that you're probably JPEG decoding the stream into an image and then converting that into a bitmap and then into an OpenCV image.
One way around this is to bypass the .NET imaging entirely. This would involve trying to use libjpeg directly. There's a free port of it here in C#, and IIRC you can hook into it to get called on a per-scanline basis to fill up a buffer.
The downside is that you're decoding JPEG data in managed code, which will run at least 1.5x slower than the equivalent C, although quite frankly I would expect network speed to dwarf this immensely.
OpenCV should be able to read jpeg images directly (wanna guess what they use under the hood? Survey says: libjpeg), which means that you can buffer up the entire stream and hand it to OpenCV and bypass the .NET layer entirely.
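Roughly, that would look like the following. This is an untested sketch: the buffering part is plain .NET, but the decode call assumes a newer Emgu CV build that exposes CvInvoke.Imdecode (older 2.x builds may name things differently):
// Buffer the whole HTTP response into memory, then decode straight to an Emgu image.
// Untested sketch; needs: using System.IO; using Emgu.CV; using Emgu.CV.CvEnum; using Emgu.CV.Structure;
byte[] jpegBytes;
using (var ms = new MemoryStream())
{
    stream.CopyTo(ms);        // 'stream' is the response stream from the camera
    jpegBytes = ms.ToArray();
}
var mat = new Mat();
CvInvoke.Imdecode(jpegBytes, ImreadModes.Color, mat);
currentFrame = mat.ToImage<Bgr, Byte>();
gray = currentFrame.Convert<Gray, Byte>();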
I believe I found the answer to my problem. I have dabbled with Vano Maisuradze's idea of processing in memory, which improved the FPS by a tiny margin (not immediately noticeable without testing). And also, thanks to Plinth's answer I have an understanding of multi-threading, and I can optimise this as I progress, since I can split the algorithms up to work in parallel.
What I think is the real cause is the networking speed, not the actual algorithm delay. As Vano pointed out, timing with a stopwatch showed the algorithms didn't actually consume that much. So with and without the algorithms the speed is about the same, if I optimise using threading so the next frame is being collected while the previous one finishes processing.
I did some testing on some physical Cisco routers and got the same result, if a bit slower when messing around with clock speeds and bandwidths, which was noticeable. So I need to find a way to retrieve frames over the network faster. A very big thank you to everyone who answered and helped me understand better!
Conclusion:
Multi-threading to optimise
Processing in memory instead of converting constantly
Better networking solutions (Higher bandwidth and speeds)
Edit: The code to retrieve an image and process in memory for anyone who finds this looking for help
public void getFrames(object sender, EventArgs e)
{   //Gets a frame from the IP cam
    //Replace "IPADDRESS", "USERNAME", "PASSWORD"
    //with respective data for your camera
    string sourceURL = "http://IPADDRESS/snapshot.cgi?user=USERNAME&pwd=PASSWORD";
    //used to store the image retrieved in memory
    //(make sure this is large enough to hold one snapshot)
    byte[] buffer = new byte[640 * 480];
    int read, total = 0;
    //Send a request to the peripheral via HTTP
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(sourceURL);
    WebResponse resp = req.GetResponse();
    //Get the image capture after receiving a request
    //Note: just a screenshot, not a steady stream
    Stream stream = resp.GetResponseStream();
    //read in chunks, but never past the end of the buffer
    while ((read = stream.Read(buffer, total, Math.Min(1000, buffer.Length - total))) != 0)
    {
        total += read;
    }//While End
    //Convert memory (byte) to bitmap and store in a picturebox
    pictureBox1.Image = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0, total));
}//getFrames End

private void button1_Click(object sender, EventArgs e)
{   //Trigger an event to start running the function when possible
    Application.Idle += new EventHandler(getFrames);
}//Button1_Click End
You can save several images in memory (a buffer) and then start processing from the buffer.
Something like this:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for later use
gray = currentFrame.Convert<Gray, Byte>();
SaveToBuffer(gray);
Queue<Emgu.CV.Image<Gray, Byte>> buffer = new Queue<Emgu.CV.Image<Gray, Byte>>();
bool canProcess = false;
// ...
private void SaveToBuffer(Emgu.CV.Image<Gray, Byte> img)
{
    buffer.Enqueue(img);
    canProcess = buffer.Count > 100;
}
private void Process()
{
    if (canProcess)
    {
        var image = buffer.Dequeue();
        // Processing logic goes here...
    }
    else
    {
        // Buffer is still loading...
    }
}
But note that you will need enough RAM to store the images in memory, and you should also adjust the buffer size to meet your requirements.
I have been trying to read a file and calculate the hash of its contents to find duplicates. The problem is that in Windows 8 (or WinRT, or Windows Store applications, or however it is called, I'm completely confused), System.IO has been replaced with Windows.Storage, which behaves differently and is very confusing. The official documentation is not useful at all.
First I need to get a StorageFile object, which in my case, I get from browsing a folder from a file picker:
var picker = new Windows.Storage.Pickers.FolderPicker();
picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.MusicLibrary;
picker.FileTypeFilter.Add("*");
var folder = await picker.PickSingleFolderAsync();
var files = await folder.GetFilesAsync(Windows.Storage.Search.CommonFileQuery.OrderByName);
Now in files I have the list of files I need to index. Next, I need to open that file:
foreach (StorageFile file in files)
{
var filestream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read);
Now is the most confusing part: getting the data from the file. The documentation was useless, and I couldn't find any code example. Apparently, Microsoft thought getting pictures from the camera is more important than opening a file.
The file stream has a member ReadAsync which I think reads the data. This method needs a buffer as a parameter and returns another buffer (???). So I create a buffer:
var buffer = new Windows.Storage.Streams.Buffer(1024 * 1024 * 10); // 10 mb should be enough for an mp3
var resultbuffer = await filestream.ReadAsync(buffer, 1024 * 1024 * 10, Windows.Storage.Streams.InputStreamOptions.ReadAhead);
I am wondering... what happens if the file doesn't have enough bytes? I haven't seen any info in the documentation.
Now I need to calculate the hash for this file. To do that, I need to create an algorithm object...
var alg = Windows.Security.Cryptography.Core.HashAlgorithmProvider.OpenAlgorithm("md5");
var hashbuff = alg.HashData(resultbuffer);
// Cleanup
filestream.Dispose();
I also considered reading the file in chunks, but how can I calculate the hash like that? I looked everywhere in the documentation and found nothing about this. Could it be the CryptographicHash class type with its 'Append' method?
Now I have another issue. How can I get the data from that weird buffer thing to a byte array? The IBuffer class doesn't have any 'GetData' member, and the documentation, again, is useless.
So all I could do now is wonder about the mysteries of the universe...
// ???
}
So the question is... how can I do this? I am completely confused, and I wonder why Microsoft chose to make reading a file so... so... so... impossible! Even in assembly I could figure it out more easily than... this thing.
WinRT, or the Windows Runtime, should not be confused with .NET, as it is not .NET. WinRT has access to only a subset of the Win32 API, not to everything the way .NET does. Here is a pretty good article on what the rules and restrictions in WinRT are.
WinRT in general does not have free access to the file system. It works with capabilities, and you can declare the file access capability, but this restricts your app's access to only certain areas. Here is a good example of how to do file access via WinRT.
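As for the hashing part of the question: yes, the incremental route is a CryptographicHash created from the provider, fed with Append. Here's a rough, not fully tested sketch of hashing a StorageFile in chunks and getting the result out of the IBuffer:
using System;
using System.Threading.Tasks;
using Windows.Security.Cryptography;
using Windows.Security.Cryptography.Core;
using Windows.Storage;
using Windows.Storage.Streams;

// Rough sketch (not fully tested): hash a file in chunks so the whole
// thing never has to sit in memory at once.
async Task<string> HashFileAsync(StorageFile file)
{
    var provider = HashAlgorithmProvider.OpenAlgorithm(HashAlgorithmNames.Md5);
    var hash = provider.CreateHash();
    using (var stream = await file.OpenAsync(FileAccessMode.Read))
    {
        var buffer = new Windows.Storage.Streams.Buffer(1024 * 1024); // 1 MB chunks
        IBuffer chunk;
        do
        {
            chunk = await stream.ReadAsync(buffer, buffer.Capacity, InputStreamOptions.None);
            hash.Append(chunk); // a short final chunk is fine; Append just takes what's there
        } while (chunk.Length > 0);
    }
    IBuffer digest = hash.GetValueAndReset();
    // CryptographicBuffer.CopyToByteArray(digest, out byte[] bytes) gives you a byte[],
    // or encode it straight to a string:
    return CryptographicBuffer.EncodeToHexString(digest);
}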
I'm using an AsyncFileUpload (AJAX Toolkit) to upload images.
I have a Button which handles the image resizing.
This has worked fine for some time, but not anymore...
protected void BtnUploadImage_Click(object sender, EventArgs e)
{
var imageFileNameRegEx = new Regex(@"(.*?)\.(jpg|jpeg|png|gif)$",
RegexOptions.IgnoreCase);
if (!AsyncFileUpload1.HasFile ||
!imageFileNameRegEx.IsMatch(AsyncFileUpload1.FileName))
{
AsyncFileUpload1.FailedValidation = true;
ErrorLabel.Visible = true;
return;
}
ErrorLabel.Visible = false;
var file = AsyncFileUpload1.PostedFile.InputStream;
var img = Image.FromStream(file, false, false);
...
}
Another thing which I find weird: if I try an image which is smaller than 80 KB, it works!
We have tried to restart the server, but no change.
Same code runs fine on my machine. (heard that before ?? :) )
I also tried to save the file on the server, then to get the file through Image.FromFile(), but then I get "Cannot access a closed file."
How do I resolve this?
I would make sure the stream is positioned at the start:
var file = AsyncFileUpload1.FileContent;
file.Seek(0, SeekOrigin.Begin);
var img = Image.FromStream(file);
Second thing to check: the requestLengthDiskThreshold setting. Unless specified this setting has a default of ... yes, 80 KB.
Note: imo there should be no overall difference whether you use Image to read the file stream directly or if you use an intermediate MemoryStream (other than the fact that in the latter case you actually load the entire file into memory twice). Either way the original file stream will be read from, thus stream position, CAS rights, file permissions, etc. still apply.
Note2: and yes, by all means make sure those resources are disposed properly :)
This is correct, it will not work. The problem is that you are crossing a managed/unmanaged boundary; I recently encountered the same thing. Other problems are that the stream is not directly there and Image.FromStream has no idea how to deal with it.
The solution is quite straightforward: read everything from PostedFile into a MemoryStream (just use new MemoryStream()) and use the MemoryStream with the Image.FromStream. This will solve your problem.
Make sure to make proper use of using when you work with Image, Graphics and Streams. All of them implement IDisposable, and in an ASP.NET environment, not using using blocks properly can and will lead to increased memory usage and other nasty side effects in the long run (and ASP.NET apps do run very long!).
The solution should look something like this:
using (Stream memstr = new MemoryStream())
{
    // copy to a memory stream
    Stream uploadStream = AsyncFileUpload1.PostedFile.InputStream;
    byte[] all = new byte[uploadStream.Length];
    uploadStream.Read(all, 0, all.Length);
    memstr.Write(all, 0, all.Length);
    memstr.Seek(0, SeekOrigin.Begin);
    using (Image img = Image.FromStream(memstr))
    {
        // do your img manipulation, or Save it.
    }
}
Update: the crossing managed boundary issue only occurs in the reverse (using Response stream), it seems, not with Upload streams, but I'm not entirely sure.