I am trying to use Unity's NatCorder asset to get frames of a video. Is this possible? I can record video with NatCorder using:
WebCamTexture cameraTexture = new WebCamTexture();
cameraTexture.Play();
NatCorder.StartRecording(Configuration.Screen, OnVideo);
// Note: OnVideo is a callback that NatCorder invokes once recording finishes
and then in my Update function:
if (NatCorder.IsRecording && cameraTexture.didUpdateThisFrame) {
// Acquire an encoder frame from NatCorder
var frame = NatCorder.AcquireFrame();
// Blit the current camera preview frame to the encoder frame
Graphics.Blit(cameraTexture, frame);
// Commit the frame to NatCorder for encoding
NatCorder.CommitFrame(frame);
}
but my goal is to take each frame and encode it to a png or jpg file instead of creating a video. Is this possible, or is there a better way to achieve this?
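One possible approach, sketched here under the assumption that you only need the raw camera frames (NatCorder isn't required at all for this): copy each `WebCamTexture` frame into a readable `Texture2D` and encode it with `EncodeToPNG`. The class name and `frameIndex` counter are illustrative, not from the original question.

```
// Sketch (untested): save each webcam frame as a PNG inside a Unity MonoBehaviour.
using System.IO;
using UnityEngine;

public class FrameGrabber : MonoBehaviour
{
    WebCamTexture cameraTexture;
    int frameIndex;

    void Start()
    {
        cameraTexture = new WebCamTexture();
        cameraTexture.Play();
    }

    void Update()
    {
        if (!cameraTexture.didUpdateThisFrame) return;

        // Copy the current camera frame into a readable Texture2D
        var tex = new Texture2D(cameraTexture.width, cameraTexture.height, TextureFormat.RGB24, false);
        tex.SetPixels(cameraTexture.GetPixels());
        tex.Apply();

        // Encode to PNG and write to disk (JPG via EncodeToJPG works the same way)
        var path = Path.Combine(Application.persistentDataPath, $"frame_{frameIndex++}.png");
        File.WriteAllBytes(path, tex.EncodeToPNG());
        Destroy(tex); // avoid leaking one texture per frame
    }
}
```

Note that encoding a PNG every frame on the main thread is slow; for sustained capture you would batch the pixel data and encode off the hot path.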
Related
I'm using SharpAvi's AviWriter to create AVI files, and any standard video player can play them: 20 frames per second with no compression, using the example from this link. So that seems to be working fine.
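For reference, the writer pattern described above looks roughly like this; property and class names follow SharpAvi's documented API, but details vary between SharpAvi versions, and `frameData` is a hypothetical pixel buffer:

```
// Sketch of the SharpAvi uncompressed-writer pattern (20 fps, no compression)
using SharpAvi;
using SharpAvi.Output;

var writer = new AviWriter("test.avi")
{
    FramesPerSecond = 20,
    EmitIndex1 = true // legacy index so standard players can seek
};

var stream = writer.AddVideoStream();
stream.Width = 640;
stream.Height = 480;
stream.Codec = KnownFourCCs.Codecs.Uncompressed;
stream.BitsPerPixel = BitsPerPixel.Bpp32;

// frameData: one frame of bottom-up BGR32 pixels (hypothetical buffer)
stream.WriteFrame(true, frameData, 0, frameData.Length);

writer.Close();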
Failing to find a SharpAvi AviReader anywhere, I've resorted to using AForge.Video.VFW's AVIReader, but it shows only a black screen for each frame before index 128, and then it fails to get the next frame.
The example I'm using is straightforward:
// instantiate AVI reader
AVIReader reader = new AVIReader( );
// open video file
reader.Open( "test.avi" );
// read the video file
while ( reader.Position - reader.Start < reader.Length )
{
    // get next frame
    Bitmap image = reader.GetNextFrame( );
    // .. process the frame somehow or display it, then release it
    image.Dispose( );
}
// close the reader when done
reader.Close( );
I have my app's build set to x86 to accommodate the 32-bit requirements of both these AVI libraries.
Also, AForge.Video.VFW's AVIWriter fails to write files with more than roughly 500 frames (the video player needs to rebuild the index, and the C# IDE "fails opening AVI file").
Does SharpAvi have an AVIReader? Because I haven't found one.
I have code that receives a video stream through the webrtc library and shows it in a PictureBox. My question is: how can I save that stream from the PictureBox to a video file on my computer?
public unsafe void OnRenderRemote(byte* yuv, uint w, uint h)
{
lock (pictureBoxRemote)
{
if (0 == encoderRemote.EncodeI420toBGR24(yuv, w, h, ref bgrBuffremote, true))
{
if (remoteImg == null)
{
// pin the BGR buffer so the Bitmap can wrap it without copying
var bufHandle = GCHandle.Alloc(bgrBuffremote, GCHandleType.Pinned);
remoteImg = new Bitmap((int)w, (int)h, (int)w * 3, PixelFormat.Format24bppRgb, bufHandle.AddrOfPinnedObject());
}
}
}
try
{
Invoke(renderRemote, this);
}
catch // don't throw on form exit
{
}
}
This code receives the stream through webrtc and converts it into images that are then shown in a PictureBox by calling this function. My question is:
How can I save an array or buffer of remoteImg images so I can write them to a video file on my PC?
Try doing something like this:
FileWriter.Open ("C:\\Users\\assa\\record.avi", (int) w, (int) h, (int) w * 3, VideoCodec.Default, 5000000);
FileWriter.WriteVideoFrame (remoteImg);
but it only saves a single capture, not a video. Is there any way to save the images from the stream with the OnRenderRemote function (described above) so they can be written to a video?
OnRenderRemote only updates the PictureBox every time it is called, but I do not know how to save that flow as a video.
Thanks.
First: I do not know exactly how webrtc works, but I can explain how you must process the images to save them into a file.
OK, let's start: you currently only have full-sized bitmaps coming from the lib. That is fine as long as you do not care about file size and only want to show the "latest" frame. To store multiple frames into a file that we would call a "video", you need an encoder that processes those frames together.
Complicated things made simple: an encoder takes two frames, call them frame A and frame B, and compresses them in a way that only the changes from frame A to frame B are saved. This saves a lot of storage, because in a video we mostly care about "changes", aka movements, from one frame to the next. There are quite a lot of encoders out there, but ffmpeg is the most popular, and there are quite a few C# wrappers for it, so take a look.
Summary: to make two or more images a "video", you have to process them with an encoder into a format that can be played by a video player.
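As a concrete sketch of the above, one ffmpeg-backed wrapper is AForge.Video.FFMPEG's VideoFileWriter. This is not the asker's code; `w`, `h` and `remoteImg` are assumed to be the values from OnRenderRemote in the question:

```
// Sketch: append each decoded remote frame to a video file
// instead of only updating the PictureBox.
using System.Drawing;
using AForge.Video.FFMPEG;

var writer = new VideoFileWriter();
// width/height must match the decoded frames (w and h from OnRenderRemote)
writer.Open(@"C:\Users\assa\record.avi", (int)w, (int)h, 25, VideoCodec.MPEG4);

// at the end of each OnRenderRemote call, append the current frame:
writer.WriteVideoFrame(remoteImg);

// when the stream ends:
writer.Close();
```

The key difference from the single-capture attempt above is that the writer is opened once, fed one frame per callback, and closed only when the stream ends.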
I'm working with c# and OpenCV. I have a Bitmap that I want to write as a frame of video using the VideoWriter provided by OpenCV. I've done this in Python so know it will work. I just need the conversion step from Bitmap to Mat.
My (partial) code looks roughly like this...
VideoWriter video = new VideoWriter(filename, fps, frameSize, false);
Bitmap image = SomethingReturningABitmap();
// NEED CONVERT FROM Bitmap to Mat
Mat frame = new Mat();
video.Write(frame);
I'm using the Bitmap Converter OpenCV Extensions to convert my Bitmap to a Mat object in memory.
PSTK ps = new PSTK();
Image img = ps.CaptureScreen();
Bitmap bmpScreenshot = new Bitmap(img);
image = BitmapConverter.ToMat(bmpScreenshot);
One could also do the following, but then you will incur significant overhead writing and reading the data to and from disk.
myBitmap.Save("TempFile.PNG");
Mat myMatImage = CvInvoke.Imread("TempFile.PNG");
I wanted to see roughly what the difference looked like, so I processed a 1920x1080 screencast of my desktop for 500 frames using each method. On average, the Bitmap.Save/Cv2.ImRead method took 0.1403 seconds per frame, while the BitmapConverter method took 0.09604 seconds per frame. So saving and re-reading the file takes roughly 50% longer on my home desktop machine (Core i7 2nd gen, 16 GB RAM).
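Putting the pieces together, the write loop from the question could look like this; it is sketched with OpenCvSharp (the library that provides `BitmapConverter.ToMat` used above), and `SomethingReturningABitmap`, `filename`, `fps`, `frameCount` and `frameSize` are placeholders from the question, not real APIs:

```
// Sketch: convert each Bitmap in memory and hand it to the OpenCV VideoWriter
using System.Drawing;
using OpenCvSharp;
using OpenCvSharp.Extensions;

using (var video = new VideoWriter(filename, FourCC.XVID, fps, frameSize, true))
{
    for (int i = 0; i < frameCount; i++)
    {
        using (Bitmap bmp = SomethingReturningABitmap())   // placeholder source
        using (Mat frame = BitmapConverter.ToMat(bmp))     // in-memory conversion
        {
            video.Write(frame);
        }
    }
}
```

Disposing each Mat and Bitmap inside the loop matters here, since 500 full-HD frames would otherwise accumulate quickly in unmanaged memory.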
I'm using NReco Video Converter to create video thumbnails. Here is the C# code that I'm using:
(new NReco.VideoConverter.FFMpegConverter()).GetVideoThumbnail(fileSource, thumbNailPath, (float)0.1);
It works fine; the only issue is the orientation. The videos I'm creating thumbnails for are recorded in a mobile app, so regardless of whether the video was shot in portrait or landscape mode, the generated thumbnail is randomly portrait or landscape.
Does anyone know how to create a thumbnail of a video in a particular mode (landscape or portrait)?
There is a rotation parameter in video files that you can read using various other ffmpeg wrapper libraries. Many players use it to actually rotate the picture. See here. As NReco does not support this directly, you would have to read this value with some other library and use it to rotate the jpeg in the stream.
I suggest using an ffmpeg wrapper where you can directly invoke ffmpeg process instances, as ffmpeg is able to read various properties from the file.
You can use ffmpeg to get the rotation from the video metadata and apply the appropriate rotation filter during thumbnail extraction. Since NReco VideoConverter is a .NET ffmpeg wrapper, it can also be used for that:
1. Extract the video orientation metadata from the ffmpeg console output (LogReceived event) using the Invoke or ConvertMedia methods, which don't actually perform any conversion. The rotation value can be matched with a simple regex.
2. Compose FFMpeg arguments for the appropriate rotation filter (like: -vf "transpose=1").
3. Extract the thumbnail with the ConvertMedia method that accepts extra ffmpeg command line arguments (see the code snippet below; internally, GetVideoThumbnail uses the ConvertMedia method):
var thumbSettings = new ConvertSettings() {
VideoFrameCount = 1,
VideoFrameRate = 1,
MaxDuration = 1, // extract exactly 1 frame
Seek = 0, // frame seek position
CustomOutputArgs = String.Format(" -vf \"{0}\"", rotateFilter ) // rotation filter parameters
};
ffMpegConverter.ConvertMedia(inputFile1, null, thumbJpegOutputStream, "mjpeg", thumbSettings);
As a result, you will get a video thumbnail rotated according to the video orientation metadata. Full code that implements all the steps can be found in the VideoConverter package (Rotate example).
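Step 1 above could be sketched roughly as follows. The LogReceived event and ConvertMedia signature follow NReco's documented API, but the exact no-op probe call and the log line format are assumptions; the NReco "Rotate" example shows the authoritative pattern:

```
// Sketch: capture ffmpeg's console output and regex out the rotate tag
using System.IO;
using System.Text.RegularExpressions;
using NReco.VideoConverter;

var ffMpegConverter = new FFMpegConverter();
int rotation = 0;
ffMpegConverter.LogReceived += (o, args) =>
{
    // ffmpeg prints something like "rotate : 90" in the metadata block
    var m = Regex.Match(args.Data ?? "", @"rotate\s*:\s*(\d+)");
    if (m.Success) rotation = int.Parse(m.Groups[1].Value);
};
// run a throwaway conversion to the "null" muxer just to make ffmpeg
// print the input metadata (assumed probe technique)
ffMpegConverter.ConvertMedia(inputFile1, null, Stream.Null, "null", new ConvertSettings());
```

The `rotation` value (0/90/180/270) then selects the transpose filter string passed as CustomOutputArgs in the snippet above.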
This example on Code Project is almost exactly what I need... except that saveFrameFromVideo takes a percentage instead of a frame number.
How can I use this to extract frame X from a WMV file?
I've also tried FFmpeg.NET, but there weren't any downloadable builds and I couldn't get the source to build.
You can also try AsfMojo for this task; it allows you to extract an image by time offset:
Bitmap bitmap = AsfImage.FromFile(videoFileName)
.AtOffset(17.34);
Internally, the Media SDK and some custom stream manipulation are used to get frame-accurate still frames (up to a 100 millisecond tolerance), so if you know the frame rate of your media file (e.g. 25) you can calculate the time offset of the nearest frame:
int frameX = 400; //get 400th frame
double frameRate = 25.0;
double timeOffset = frameX / frameRate;
Bitmap bitmap = AsfImage.FromFile(videoFileName)
.AtOffset(timeOffset);
The magic is in this line:
mediaDet.WriteBitmapBits(streamLength * percentagePosition,
target.Width, target.Height, outputBitmapFile);
It's calculating the frame number from the percentage and the length of the stream. Since you already know the frame number, use that instead.
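Concretely, the conversion is just the inverse of what the sample does; `totalFrames` would have to come from the file's metadata and is hypothetical here:

```
// Sketch: derive the percentage the Code Project sample expects
// from a known frame number
int frameX = 400;         // the frame you want
int totalFrames = 1500;   // hypothetical: read from the file's metadata
double percentagePosition = (double)frameX / totalFrames;

// then, exactly as in the sample:
// mediaDet.WriteBitmapBits(streamLength * percentagePosition,
//     target.Width, target.Height, outputBitmapFile);
```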
I have been working on extracting frames from webcam videos and video files. For both, I used the AForge library (you need to add references to AForge.Video, AForge.Imaging, AForge.Video.DirectShow and AForge.Video.FFMPEG). For live video, I added a videoSourcePlayer_NewFrame(object sender, ref Bitmap image) handler to get the frame; the Bitmap image parameter contains the required frame. This is basically the event handler for the VideoSourcePlayer I added to the Windows form.
For video from a file, I used:
videoSource = new FileVideoSource(fileName);
videoSource.NewFrame += new AForge.Video.NewFrameEventHandler(videoSource_NewFrame);
videoSource.Start();
videoSource_NewFrame is the event handler called whenever there is a new frame. (Subscribing before calling Start ensures no early frames are missed.)
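The handler itself could look like the sketch below. AForge delivers each frame as a Bitmap in NewFrameEventArgs; the `frameCounter` field and the PNG output path are illustrative additions:

```
// Sketch: save each incoming AForge frame to a numbered PNG
using System.Drawing;
using AForge.Video;

int frameCounter = 0;

void videoSource_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    // Clone the frame: AForge reuses the Bitmap after the handler returns
    using (Bitmap frame = (Bitmap)eventArgs.Frame.Clone())
    {
        frame.Save($"frame_{frameCounter++}.png");
    }
}
```

Cloning before the handler returns is the important detail; holding a reference to eventArgs.Frame itself leads to disposed or overwritten bitmaps.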