creating a video from frames - c#

In my Windows Universal app, I'm trying to use a WinRT component: http://blogs.msdn.com/b/eternalcoding/archive/2013/03/06/developing-a-winrt-component-to-create-a-video-file-using-media-foundation.aspx (which is basically a C++ wrapper for the SinkWriter)
to create a video from frames.
I put all this code in a C++ project and I can call it from my C# code without any problem.
The first problem comes with the constructor:
HRESULT CVideoGenerator::InitializeSinkWriter(Windows::Storage::Streams::IRandomAccessStream^ stream)
I'm not sure how to create the stream:
var filename = "exportedVideo.wmv";
var folder = Windows.Storage.KnownFolders.VideosLibrary;
StorageFile storageFile = await folder.CreateFileAsync(filename, CreationCollisionOption.ReplaceExisting);
IRandomAccessStream stream = await storageFile.OpenAsync(FileAccessMode.ReadWrite);
StorageFile file = await StorageFile.GetFileFromPathAsync(App.PhotoModel.Path);
CVideoGenerator videoGenerator = new CVideoGenerator(1280, 720, stream, 20);
The other problem comes from this line:
hr = sinkWriter->SetInputMediaType(streamIndex, mediaTypeIn, NULL);
// hr = 0xC00D5212 (MF_E_TOPO_CODEC_NOT_FOUND): No suitable transform was found to encode or decode the content.
Any ideas?

I've used this VideoGenerator sample and got the same problem.
I'm no expert in Media Foundation, but after some research I found that the problem was in these lines:
encodingFormat = MFVideoFormat_WMV3;
inputFormat = MFVideoFormat_RGB32;
Well, I replaced the first format with the second one, like this:
encodingFormat = MFVideoFormat_RGB32;
inputFormat = MFVideoFormat_RGB32;
That seems to work until a new exception is thrown in the WriteSample method:
hr = MFCopyImage(
    pData,                    // Destination buffer.
    cbWidth,                  // Destination stride.
    (BYTE*)videoFrameBuffer,  // First row in source image.
    cbWidth,                  // Source stride.
    cbWidth,                  // Image width in bytes.
    videoHeight               // Image height in pixels.
);
Apparently there's an access violation while writing to memory.
Still trying to figure it out!
McSime


How can I set PdfFormField image in iTextSharp 7 and release the file after using it?

I'm having a lot of trouble trying to set a PdfFormField image and then release the file used by the iText process. Here is what I'm doing now (setting the image itself works just fine, but the file is still being held by the process...):
public void SetImageField(string idFormField, string imagePath)
{
    PdfButtonFormField pdfButtonFormField = (PdfButtonFormField)_pdfAcroForm
        .GetField(idFormField);
    if (pdfButtonFormField == null)
        throw new InstanceNotFoundException("Não foi encontrado o campo de assinatura no pdf!");
    pdfButtonFormField.SetImage(imagePath);
    pdfButtonFormField.SetBorderWidth(0);
    pdfButtonFormField.Flush();
    pdfButtonFormField.Release();
}
As you can see, I'm setting the pdfButtonFormField image through pdfButtonFormField.SetImage(imagePath). The thing is, I need to delete this file (imagePath) after using it, and it seems that the iText process is still holding the resource, even though I call pdfButtonFormField.Flush() and pdfButtonFormField.Release().
You may say, "why don't you just open a FileStream and call fileStream.Dispose() after using it?" Because the file itself is not in my hands; it's being managed by the iText API.
So please, I'd like to know if there's any way to do it.
Looking at the iText 7 source code, PdfButtonFormField.SetImage does the following:
Opens a FileStream using the image path (which it does not release).
Calls an internal utility method to read the FileStream into a byte array.
Calls Convert.ToBase64String to convert the byte array into a string.
Passes the resulting string to PdfButtonFormField.SetValue.
You can do the first three steps yourself and then call SetValue on the PdfButtonFormField.
Assuming you've written your own method ReadFileToArray to read the image file and return it as an array of bytes, this should work:
public void SetImage(PdfAcroForm pdfAcroForm, string idFormField, string imagePath)
{
    var pdfButtonFormField = (PdfButtonFormField) pdfAcroForm.GetField(idFormField);
    if (pdfButtonFormField == null)
        throw new InstanceNotFoundException();
    var imageBytes = ReadFileToArray(imagePath);
    var imageStr = Convert.ToBase64String(imageBytes);
    pdfButtonFormField.SetValue(imageStr);
    pdfButtonFormField.SetBorderWidth(0);
}
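For completeness, a minimal sketch of the ReadFileToArray helper assumed above; File.ReadAllBytes opens the file, reads it in full and closes the handle, so the image file can be deleted afterwards:
private static byte[] ReadFileToArray(string imagePath)
{
    // Reads the whole file into memory and releases the file handle immediately.
    return System.IO.File.ReadAllBytes(imagePath);
}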
Here is a link to the source for PdfButtonFormField:
PdfButtonFormField.cs

Exporting a 3D double array to a tiff image stack in C# [duplicate]

I load a multiframe TIFF from a Stream in my C# application, and then save it using the Image.Save method. However, this only saves the TIFF with the first frame - how can I get it to save a multiframe tiff?
Since you don't provide any detailed information... just some general tips:
Multi-frame TIFFs are very complex files - for example, every frame can have a different encoding... a single Bitmap/Image can't hold all the frames with all the relevant information (like encoding and similar) of such a file, only one at a time.
For loading you need to set a parameter that tells the class which frame to load, otherwise it just loads the first... for some code see here (a small sketch follows below).
Similar problems arise when saving multi-frame TIFFs - here you need to work with EncoderParameters and use SaveAdd etc. - for some working code see here.
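Since those links are broken (as noted below), here is a minimal sketch of the loading side - my own illustration, with a hypothetical file path and frame index:
using System.Drawing;
using System.Drawing.Imaging;

// Load a multi-frame TIFF and make a specific page the active frame.
using (Image tiff = Image.FromFile(@"C:\Cache\multiframe.tiff")) // hypothetical path
{
    int pages = tiff.GetFrameCount(FrameDimension.Page);
    int frameToLoad = 2; // hypothetical page index; must be less than pages
    tiff.SelectActiveFrame(FrameDimension.Page, frameToLoad);
    // tiff now exposes the pixels of the selected page (e.g. for drawing or saving).
}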
Since the link to the code provided by @Yahia is broken, I have decided to post the code I ended up using.
In my case, the multi-frame TIFF already exists and all I need to do is load the image, rotate it according to its EXIF orientation (if necessary) and save it. I won't post the EXIF rotation code here, since it does not relate to this question.
using (Image img = System.Drawing.Image.FromStream(sourceStream))
{
    using (FileStream fileStream = System.IO.File.Create(filePath))
    {
        int pages = img.GetFrameCount(System.Drawing.Imaging.FrameDimension.Page);
        if (pages == 1)
        {
            img.Save(fileStream, img.RawFormat); // if there is just one page, just save the file
        }
        else
        {
            var encoder = System.Drawing.Imaging.ImageCodecInfo.GetImageEncoders().First(x => x.MimeType == fileInfo.MediaType);
            var encoderParams = new System.Drawing.Imaging.EncoderParameters(1);
            encoderParams.Param[0] = new System.Drawing.Imaging.EncoderParameter(System.Drawing.Imaging.Encoder.SaveFlag, Convert.ToInt32(System.Drawing.Imaging.EncoderValue.MultiFrame));
            img.Save(fileStream, encoder, encoderParams); // save the first page with the MultiFrame parameter
            for (int f = 1; f < pages; f++)
            {
                img.SelectActiveFrame(FrameDimension.Page, f); // select the active page (System.Drawing.Image.FromStream loads the first one by default)
                encoderParams.Param[0] = new System.Drawing.Imaging.EncoderParameter(System.Drawing.Imaging.Encoder.SaveFlag, Convert.ToInt32(System.Drawing.Imaging.EncoderValue.FrameDimensionPage));
                img.SaveAdd(img, encoderParams); // SaveAdd with the FrameDimensionPage parameter
            }
        }
    }
}
sourceStream is a System.IO.MemoryStream which holds the byte array of the file content
filePath is the absolute path to the cache directory (something like 'C:/Cache/multiframe.tiff')
fileInfo is a model holding the actual byte array, fileName, mediaType and other data

My stream keeps throwing Read/Write Timeout exceptions

I am parsing a PowerPoint presentation using the Open XML SDK 2.0. At one point in the program I'm passing a stream to a method that will return an image's MD5 hash. However, there seems to be a problem with the stream before it even gets to my MD5 method.
Here's my code:
// Get image information here.
var blipRelId = blip.Embed;
var imagePart = (ImagePart)slidePart.GetPartById(blipRelId);
var imageFileName = imagePart.Uri.OriginalString;
var imageStream = imagePart.GetStream();
var imageMd5 = Hasher.CalculateStreamHash(imageStream);
In debug, before I let it drop into Hasher.CalculateStreamHash, I check the imageStream properties. Immediately, I see that the ReadTimeout and WriteTimeout both have similar errors:
'imageStream.ReadTimeout' threw an exception of type 'System.InvalidOperationException'
'imageStream.WriteTimeout' threw an exception of type 'System.InvalidOperationException'
Here's a picture of the properties I'm seeing during debug, in case it helps:
This code is running over a PowerPoint presentation. I'm wondering if the fact that it's zipped (a PowerPoint presentation is basically just a zipped up file) is the reason I'm seeing those timeout errors?
UPDATE: I tried taking the stream, getting the image and converting it to a byte array and sending that to the MD5 method as a memory stream, but I still get those same errors in the Read/Write Timeout properties of the stream. Here's the code as it is now:
// Get image information here.
var blipRelId = blip.Embed;
var imagePart = (ImagePart)slidePart.GetPartById(blipRelId);
var imageFileName = imagePart.Uri.OriginalString;
var imageStream = imagePart.GetStream();
// Convert image to memory stream
var img = Image.FromStream(imageStream);
var imageMemoryStream = new MemoryStream(this.imageToByteArray(img));
var imageMd5 = Hasher.CalculateStreamHash(imageMemoryStream);
For clarity, here's the signature for the CalculateStreamHash method:
public static string CalculateStreamHash([NotNull] Stream stream)
Mischief managed! I was able to overcome this problem by using a BufferedStream and adding an overload of my MD5 method that accepts a BufferedStream as a parameter:
// Get image information here.
var blipRelId = blip.Embed;
var imagePart = (ImagePart)slidePart.GetPartById(blipRelId);
var imageFileName = imagePart.Uri.OriginalString;
// Convert image to buffered stream
var imageBufferedStream = new BufferedStream(imagePart.GetStream());
var imageMd5 = Hasher.CalculateStreamHash(imageBufferedStream);
...and:
public static string CalculateStreamHash([NotNull] BufferedStream bufferedStream)
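For reference, a minimal sketch of what that overload might look like (my own code, not the original Hasher implementation), using System.Security.Cryptography.MD5:
public static string CalculateStreamHash(BufferedStream bufferedStream)
{
    using (var md5 = System.Security.Cryptography.MD5.Create())
    {
        // ComputeHash reads the stream to the end and returns the 16-byte digest.
        byte[] hash = md5.ComputeHash(bufferedStream);
        return BitConverter.ToString(hash).Replace("-", string.Empty);
    }
}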

PhotoCaptureDevice in native code

On Windows Phone 8 I want to take a camera shot in native code, but I'm stuck at the final stage, unable to extract the information from the IOutputStream.
In C# I would write:
MemoryStream image = new MemoryStream();
MemoryStream imagePreview = new MemoryStream();
cameraCaptureSequence.Frames[0].CaptureStream = image.AsOutputStream();
cameraCaptureSequence.Frames[0].ThumbnailStream = imagePreview.AsOutputStream();
await cameraCaptureSequence.StartCaptureAsync();
From then on, the image stream holds the captured image data and I can render it.
In C++/CX I need to do the same thing, and ultimately get at the byte* of the captured image. Here is my code:
Windows::Phone::Media::Capture::CameraCaptureSequence^ cameraCaptureSequence;
IBuffer^ image;
return concurrency::create_async([this]()
{
    cameraCaptureSequence->Frames->GetAt(0)->CaptureStream = reinterpret_cast<IOutputStream^>(image);
    create_task( m_camera->PrepareCaptureSequenceAsync(cameraCaptureSequence) ).wait();
    create_task( cameraCaptureSequence->StartCaptureAsync() ).then([this]()
    {
        // ...
    });
});
Starting from the most basic part, I wish to understand how to "save" the result of the captured image stream into an IBuffer^, and better still, how to get at its internal byte* buffer.
Thanks
You can access the pixel data of a captured image in native code through the ICameraCaptureFrameNative interface. The object that implements the interface is obtained through COM. Once you have obtained the object, use MapBuffer() to access the BYTE* array.
Note that the pixel data obtained this way is in NV12 format, not JPEG or RGB as one might expect.
#include <Windows.Phone.Media.Capture.Native.h>
CameraCaptureFrame^ frame = m_cameraCaptureSequence->Frames->GetAt(0);
ICameraCaptureFrameNative* pNativeFrame = NULL;
HRESULT hr = reinterpret_cast<IUnknown*>(frame)->QueryInterface(__uuidof(ICameraCaptureFrameNative), (void**) &pNativeFrame);
create_task( m_camera->PrepareCaptureSequenceAsync(m_cameraCaptureSequence) ).wait();
create_task( m_cameraCaptureSequence->StartCaptureAsync() ).then([this]()
{
    DWORD bufferSize = 0;
    BYTE* pBuffer = NULL;
    pNativeFrame->MapBuffer(&bufferSize, &pBuffer); // Pixels are in pBuffer.
    // Unmap() the buffer before capturing another image.
});
ICameraCaptureFrameNative doesn't give access to a texture containing the preview?
If you want to access data from an IBuffer, look here: http://msdn.microsoft.com/en-us/library/windows/apps/dn182761.aspx
For your case, I think you need a class which implements IOutputStream. Maybe InMemoryRandomAccessStream?
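For illustration, here is a C# sketch (my own, not part of the answer above) of pulling the bytes out of an IBuffer with DataReader once the capture has produced one; the equivalent calls exist in C++/CX:
using Windows.Storage.Streams;

// Hypothetical helper: copy the contents of an IBuffer into a managed byte array.
static byte[] BufferToBytes(IBuffer buffer)
{
    var bytes = new byte[buffer.Length];
    using (DataReader reader = DataReader.FromBuffer(buffer))
    {
        reader.ReadBytes(bytes); // fills the array with the buffer's contents
    }
    return bytes;
}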

WriteableBitmap PixelBuffer Stream Length Too Small

I'm trying to copy the pixel buffer of one WriteableBitmap over to another WriteableBitmap, essentially making a copy of the WriteableBitmap object. However, when I try to do this, the second WriteableBitmap's stream length is too short to hold all the values of the first WriteableBitmap.
I posted my code below. Keep in mind that I'm capturing the original data from a webcam. However, when I compare the "ps" object's stream size to wb1 and wb2, ps's size is much smaller than both of them. What I'm confused about is why wb2's stream size is smaller than wb1's. Thanks for any help.
private MemoryStream originalStream = new MemoryStream();
WriteableBitmap wb1 = new WriteableBitmap((int)photoBox.Width, (int)photoBox.Height);
WriteableBitmap wb2 = new WriteableBitmap((int)photoBox.Width, (int)photoBox.Height);
ImageEncodingProperties imageProperties = ImageEncodingProperties.CreateJpeg();
var ps = new InMemoryRandomAccessStream();
await mc.CapturePhotoToStreamAsync(imageProperties, ps);
await ps.FlushAsync();
ps.Seek(0);
wb1.SetSource(ps);
(wb1.PixelBuffer.AsStream()).CopyTo(originalStream); // this works
originalStream.Position = 0;
originalStream.CopyTo(wb2.PixelBuffer.AsStream()); // this line gives me the error: "Unable to expand length of this stream beyond its capacity"
Image img = new Image();
img.Source = wb2; // my hope is to treat this as it's own entity and modify this image independently of wb1 or originalStream
photoBox.Source =wb1;
Note that when you do new WriteableBitmap(w, h) and then call SetSource() with an image of a different resolution, the bitmap's size will change (it won't be the w x h passed to the constructor). It's likely that your photoBox.Width/Height are different from what your CapturePhotoToStreamAsync() call returns (I am assuming the image is captured at the default or preconfigured camera settings, while photoBox is just a control on screen).
How about just doing something like this:
ps.Seek(0);
wb1.SetSource(ps);
ps.Seek(0);
wb2.SetSource(ps);
I think you should create a writer from the PixelBuffer and use it to copy the stream.
The AsStream method should be used to read the buffer, not to write into it.
Have a look at http://social.msdn.microsoft.com/Forums/en-NZ/winappswithcsharp/thread/2b499ac5-8bc8-4259-a144-842bd756bfe2 for a piece of code.
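If the two bitmaps really do have identical dimensions, a minimal sketch of a buffer-to-buffer copy (my own illustration, using the WindowsRuntimeBufferExtensions) would be:
using System.Runtime.InteropServices.WindowsRuntime; // WindowsRuntimeBufferExtensions
using Windows.UI.Xaml.Media.Imaging;

// Assumes wb1 and wb2 were created with the same width and height,
// so their pixel buffers have the same capacity.
static void CopyPixels(WriteableBitmap wb1, WriteableBitmap wb2)
{
    wb1.PixelBuffer.CopyTo(wb2.PixelBuffer); // buffer-to-buffer copy, no stream writes
    wb2.Invalidate();                        // redraw wb2 with the copied pixels
}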
