This example on Code Project is almost exactly what I need... except the saveFrameFromVideo takes a percentage instead of a frame number...
How can I use this to extract frame X from a WMV file?
I've also tried FFmpeg.NET... but there weren't any downloadable builds, and I couldn't get the source to build...
You can also try AsfMojo for this task; it lets you extract an image at a given time offset:
Bitmap bitmap = AsfImage.FromFile(videoFileName)
.AtOffset(17.34);
Internally, the Media SDK and some custom stream manipulation are used to get frame-accurate stills (to within about 100 milliseconds), so if you know the frame rate of your media file (e.g. 25) you can calculate the time offset of the nearest frame:
int frameX = 400; //get 400th frame
double frameRate = 25.0;
double timeOffset = frameX / frameRate;
Bitmap bitmap = AsfImage.FromFile(videoFileName)
.AtOffset(timeOffset);
The magic is in this line:
mediaDet.WriteBitmapBits(streamLength * percentagePosition,
target.Width, target.Height, outputBitmapFile);
It's calculating a stream time from the percentage and the length of the stream. Since you already know the frame number, convert it to a time offset (frame number divided by frame rate) and pass that instead.
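A minimal sketch of that conversion (the `mediaDet`/`target` call at the end is the Code Project sample's API, shown here only as a comment since those objects come from that sample):

```csharp
using System;

class FrameOffset
{
    // Convert a zero-based frame number to a stream time in seconds.
    static double FrameToStreamTime(int frameNumber, double frameRate)
        => frameNumber / frameRate;

    static void Main()
    {
        double streamTime = FrameToStreamTime(400, 25.0);
        Console.WriteLine(streamTime); // 16

        // Then, with the sample's objects in scope (hypothetical here):
        // mediaDet.WriteBitmapBits(streamTime,
        //     target.Width, target.Height, outputBitmapFile);
    }
}
```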
I have been working on extracting frames from both webcam video and video files. For both, I used the AForge library (you need to add references to AForge.Video, AForge.Imaging, AForge.Video.DirectShow and AForge.Video.FFMPEG). For live video, I added a videoSourcePlayer_NewFrame(object sender, ref Bitmap image) handler to get the frame; the image parameter contains the current frame as a Bitmap. This is the event handler for the VideoSourcePlayer control I added to the Windows form.
For video from a file, I used:
videoSource = new FileVideoSource(fileName);
videoSource.NewFrame += new AForge.Video.NewFrameEventHandler(videoSource_NewFrame);
videoSource.Start();
videoSource_NewFrame is the event handler invoked whenever a new frame arrives.
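A minimal sketch of such a handler (the save path is illustrative; cloning matters because AForge reuses the frame buffer after the handler returns):

```csharp
using System.Drawing;
using AForge.Video;

void videoSource_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    // Clone the frame before using it outside the handler,
    // since AForge recycles the underlying bitmap.
    using (Bitmap frame = (Bitmap)eventArgs.Frame.Clone())
    {
        // Process or save the frame here (path is illustrative).
        frame.Save("frame.png");
    }
}
```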
I'm using SharpAvi's AviWriter to create AVI files, and I can get any standard video player to play them: 20 frames per second, no compression, using the example from this link.
So that seems to be working fine.
Failing to find a SharpAvi AviReader anywhere, I've resorted to AForge.Video.VFW's AVIReader, but it only shows a black screen for each frame before index 128, and then it fails to get the next frame.
The example I'm using is straightforward:
// instantiate AVI reader
AVIReader reader = new AVIReader( );
// open video file
reader.Open( "test.avi" );
// read the video file
while ( reader.Position - reader.Start < reader.Length )
{
    // get next frame
    Bitmap image = reader.GetNextFrame( );
    // .. process the frame somehow or display it
}
I have my app's build set to x86 to accommodate both these AVI libraries' 32-bit requirements.
Also, AForge.Video.VFW's AVIWriter fails to write files with more than roughly 500 frames (the video player needs to rebuild the index, and the C# IDE "fails opening AVI file").
Does SharpAvi have an AviReader? Because I haven't found one.
I use Accord.Video.FFMPEG to create a video of 200 images with the H264 codec. For some reason, the video is very poor quality. Its size is less than 1MB. When choosing VideoCodec.Raw, the quality is high, but I am not happy with the huge size.
I do something like this:
using (var vFWriter = new VideoFileWriter())
{
vFWriter.Open(video_name, 1920, 1080, 24, VideoCodec.H264);
for (int i = 0; i < 200; ++i)
{
var img_name_src = ...
using (Bitmap src_jpg = new Bitmap(img_name_src))
{
vFWriter.WriteVideoFrame(src_jpg);
}
}
vFWriter.Close();
}
When I run the program, messages appear:
[swscaler # 06c36d20] deprecated pixel format used, make sure you did set range correctly
[swscaler # 06e837a0] deprecated pixel format used, make sure you did set range correctly
[avi # 06c43980] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
[avi # 06c43980] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
I don’t know if they affect something.
This is what one of the source frames looks like: [image omitted]
And this is the same frame from the encoded video: [image omitted]
How to fix it?
Is there any other way in C# to create a video from individual frames?
Usually, video quality is down to the bitrate which can be changed with this overload:
writer.Open(fileName, width, height, frameRate, VideoCodec, BitRate);
With a bitrate in the millions, the video still has artifacts on highly detailed frames but is mostly fine. In the billions, however, artifacts disappear entirely, but the file size skyrockets and playback speed suffers from disk retrieval times.
Try experimenting with different VideoCodecs, bitrates and file types (mp4, avi, webm etc) to find a suitable balance for your project.
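To see why bitrate dominates file size, here is a back-of-the-envelope estimate for the 200-frame clip from the question (pure arithmetic, ignoring container overhead; the bitrates are just illustrative values to plug into the Open overload above):

```csharp
using System;

class BitrateEstimate
{
    static void Main()
    {
        // 200 frames at 24 fps is about 8.33 seconds of video.
        double seconds = 200 / 24.0;

        // Rough output size at a given bitrate: bits/s -> megabytes.
        double EstimateMB(double bitsPerSecond) =>
            bitsPerSecond * seconds / 8 / 1_000_000;

        Console.WriteLine(EstimateMB(5_000_000));  // ~5 Mbit/s: roughly 5.2 MB
        Console.WriteLine(EstimateMB(50_000_000)); // ~50 Mbit/s: roughly 52 MB
    }
}
```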
I have code that receives a video stream through the webrtc library and, in its callback, shows the frames in a PictureBox. My question is: how can I save that stream from the PictureBox to a video file on my computer?
public unsafe void OnRenderRemote(byte* yuv, uint w, uint h)
{
lock (pictureBoxRemote)
{
if (0 == encoderRemote.EncodeI420toBGR24(yuv, w, h, ref bgrBuffremote, true))
{
if (remoteImg == null)
{
var bufHandle = GCHandle.Alloc(bgrBuffremote, GCHandleType.Pinned);
remoteImg = new Bitmap((int)w, (int)h, (int)w * 3, PixelFormat.Format24bppRgb, bufHandle.AddrOfPinnedObject());
}
}
}
try
{
Invoke(renderRemote, this);
}
catch // don't throw on form exit
{
}
}
This code receives the stream through webrtc and converts it into images that are then shown in a PictureBox by calling this function. My question is:
How can I save an array or buffer of remoteImg images so I can write it to a video file on my pc?
I tried doing something like this:
FileWriter.Open("C:\\Users\\assa\\record.avi", (int)w, (int)h, (int)w * 3, VideoCodec.Default, 5000000);
FileWriter.WriteVideoFrame(remoteImg);
but it only saves a single capture, not a video. Is there any way to save the images from the stream in the OnRenderRemote function (described above) into a video file?
OnRenderRemote only updates the PictureBox each time it is called, and I do not know how to save that flow into a video.
Thanks.
First: I do not know exactly how webrtc works, but I can explain how you need to process the images to save them to a file.
OK, let's start: you currently only have the full-sized bitmaps coming from the lib. That is fine as long as you do not care about file size and only want to show the "latest" frame. To store multiple frames into a file we would call a "video", you need an encoder that processes those frames together.
Complicated things made simple: an encoder takes two frames, call them frame A and frame B, and compresses them in a way that only the changes from frame A to frame B are saved. This saves a lot of storage, because in a video we mostly want to store "changes", i.e. movement, from one frame to the next. There are quite a lot of encoders out there, but ffmpeg is by far the most popular, and there are quite a few C# wrappers for it, so take a look.
Summary: to turn two or more images into a "video", you have to process them with an encoder that writes them in a format a video player can play.
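A toy illustration of the "store only the changes" idea (this is not a real codec; real encoders like H.264 use motion compensation and transforms, but the principle is the same):

```csharp
using System;
using System.Linq;

class DeltaDemo
{
    static void Main()
    {
        // Two tiny "frames" as grayscale pixel arrays (illustrative only).
        byte[] frameA = { 10, 10, 10, 10 };
        byte[] frameB = { 10, 10, 99, 10 };

        // A delta encoder stores roughly "what changed from A to B".
        var changes = frameB
            .Select((v, i) => (Index: i, Value: v))
            .Where(p => p.Value != frameA[p.Index])
            .ToArray();

        // Only one pixel changed, so only one entry needs storing.
        Console.WriteLine(changes.Length);                           // 1
        Console.WriteLine($"{changes[0].Index}:{changes[0].Value}"); // 2:99
    }
}
```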
I am trying to use Unity's NatCorder asset to get frames of a video. Is this possible? I can record video using NatCorder using:
WebCamTexture webCam = new WebCamTexture();
webCam.Play();
NatCorder.StartRecording(Configuration.Screen, OnVideo);
//Note: OnVideo is a separate function that natcorder calls after taking the video
and then in my update function:
if (NatCorder.IsRecording && cameraTexture.didUpdateThisFrame) {
// Acquire an encoder frame from NatCorder
var frame = NatCorder.AcquireFrame();
// Blit the current camera preview frame to the encoder frame
Graphics.Blit(cameraTexture, frame);
// Commit the frame to NatCorder for encoding
NatCorder.CommitFrame(frame);
}
but my goal is to take each frame and encode it to a png or jpg file instead of creating a video. Is this possible, or is there a better way to achieve this?
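One possible direction, sketched under assumptions: Unity can encode a texture to PNG itself, without NatCorder, using Texture2D.EncodeToPNG. The class name and output path below are illustrative, and this copies pixels on the CPU every frame, so it is slow for high resolutions:

```csharp
// Unity sketch: grab each WebCamTexture frame and save it as a PNG.
using System.IO;
using UnityEngine;

public class FrameGrabber : MonoBehaviour
{
    public WebCamTexture cameraTexture; // assumed to be playing already
    int frameIndex;

    void Update()
    {
        if (cameraTexture != null && cameraTexture.didUpdateThisFrame)
        {
            // Copy the camera frame into a readable Texture2D.
            var tex = new Texture2D(cameraTexture.width, cameraTexture.height);
            tex.SetPixels(cameraTexture.GetPixels());
            tex.Apply();

            // Encode to PNG and write it out (path is illustrative).
            File.WriteAllBytes($"frame_{frameIndex++}.png", tex.EncodeToPNG());
            Destroy(tex);
        }
    }
}
```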
I'm working with c# and OpenCV. I have a Bitmap that I want to write as a frame of video using the VideoWriter provided by OpenCV. I've done this in Python so know it will work. I just need the conversion step from Bitmap to Mat.
My (partial) code looks roughly like this...
VideoWriter video = new VideoWriter(filename, fps, frameSize, false);
Bitmap image = SomethingReturningABitmap();
// NEED CONVERT FROM Bitmap to Mat
Mat frame = new Mat();
video.Write(frame);
I'm using the Bitmap Converter OpenCV Extensions to convert my Bitmap to a Mat object in memory.
PSTK ps = new PSTK();
Image img = ps.CaptureScreen();
Bitmap bmpScreenshot = new Bitmap(img);
Mat image = BitmapConverter.ToMat(bmpScreenshot);
One could also do the below, but then you will incur significant overhead from writing the data to disk and reading it back.
myBitmap.Save("TempFile.PNG");
Mat myMatImage = CvInvoke.Imread("TempFile.PNG");
I wanted to see what the difference looked like, roughly. I was processing a 1920x1080 screen cast of my desktop for 500 frames using each method and here's what I found.
On average, for the Bitmap.Save/Cv2.ImRead method, it took 0.1403 seconds per frame.
On average, for the Bitmap Converter method, it took 0.09604 seconds per frame. It takes roughly 50% longer to save and re-read the file using my home desktop machine. Core I7 2nd gen, 16GB RAM.
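If anyone wants to reproduce the comparison, a minimal timing harness along these lines should do (the per-frame work is a placeholder comment; swap in whichever conversion method you are measuring):

```csharp
using System;
using System.Diagnostics;

class FrameTimer
{
    static void Main()
    {
        const int frames = 500;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < frames; i++)
        {
            // Placeholder for the per-frame work being measured, e.g.:
            //   Mat m = BitmapConverter.ToMat(bitmap);              // in-memory
            // or:
            //   bitmap.Save("tmp.png"); Cv2.ImRead("tmp.png");      // via disk
        }
        sw.Stop();
        Console.WriteLine($"avg s/frame: {sw.Elapsed.TotalSeconds / frames:F5}");
    }
}
```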