I'm using SharpAvi's AviWriter to create AVI files (20 frames per second, no compression), following the example from this link. Any standard video player can play these files, so that part seems to be working fine.
Failing to find SharpAvi's AviReader anywhere, I've resorted to AForge.Video.VFW's AVIReader, but it shows only a black screen for each frame before index 128, and then it fails to get the next frame.
The example I'm using is straightforward:
// instantiate AVI reader
AVIReader reader = new AVIReader( );
// open video file
reader.Open( "test.avi" );
// read the video file
while ( reader.Position - reader.Start < reader.Length )
{
    // get next frame
    Bitmap image = reader.GetNextFrame( );
    // ... process the frame somehow or display it
}
reader.Close( );
I have my app's build set to x86 to accommodate the 32-bit requirements of both these AVI libraries.
Also, AForge.Video.VFW's AVIWriter fails to write files with more than roughly 500 frames (the video player then needs to rebuild the index, and the C# IDE reports "fails opening AVI file").
Does SharpAvi have an AVI reader? Because I haven't found one.
I use Accord.Video.FFMPEG to create a video from 200 images with the H264 codec. For some reason, the video quality is very poor; its size is less than 1 MB. When choosing VideoCodec.Raw, the quality is high, but I am not happy with the huge size.
I do something like this:
using (var vFWriter = new VideoFileWriter())
{
    vFWriter.Open(video_name, 1920, 1080, 24, VideoCodec.H264);

    for (int i = 0; i < 200; ++i)
    {
        var img_name_src = ...
        using (Bitmap src_jpg = new Bitmap(img_name_src))
        {
            vFWriter.WriteVideoFrame(src_jpg);
        }
    }

    vFWriter.Close();
}
When I run the program, these messages appear:
[swscaler @ 06c36d20] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 06e837a0] deprecated pixel format used, make sure you did set range correctly
[avi @ 06c43980] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
[avi @ 06c43980] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
I don't know whether they affect anything.
This is what one source frame looks like: (image omitted)
This is the same frame from the video: (image omitted)
How can I fix this?
Is there any other way in C# to create a video from individual frames?
Usually, video quality comes down to the bitrate, which can be changed with this overload:
writer.Open(fileName, width, height, frameRate, VideoCodec, BitRate);
With bitrates in the millions, the video still has artifacts on high-detail frames but is mostly fine. In the billions, however, artifacts disappear entirely, but the file size skyrockets and playback speed is affected by retrieval times from the disk.
Try experimenting with different VideoCodecs, bitrates and file types (mp4, avi, webm etc) to find a suitable balance for your project.
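For example, a minimal sketch (assuming Accord.Video.FFMPEG as in the question; the output file name, image names, and the bitrate value are placeholders to experiment with):

using System.Drawing;
using Accord.Video.FFMPEG;

using (var writer = new VideoFileWriter())
{
    // same resolution and frame rate as the question, with an explicit ~10 Mbit/s bitrate
    writer.Open("output.mp4", 1920, 1080, 24, VideoCodec.H264, 10000000);

    for (int i = 0; i < 200; ++i)
    {
        using (var frame = new Bitmap("frame" + i + ".jpg")) // hypothetical file names
        {
            writer.WriteVideoFrame(frame);
        }
    }
}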
I'm using NReco Video Converter to create video thumbnails. Here is the C# code that I'm using:
(new NReco.VideoConverter.FFMpegConverter()).GetVideoThumbnail(fileSource, thumbNailPath, (float)0.1);
It works fine; the only issue is the orientation. The videos for which I'm creating thumbnails are recorded on a mobile app, so regardless of whether the video was recorded in portrait or landscape mode, the generated thumbnail is randomly portrait or landscape.
Does anyone know how to create a thumbnail of a video in a particular mode (landscape or portrait)?
There is a rotation parameter in video file metadata that you can read using various other ffmpeg wrapper libraries; many players use it to actually rotate the picture. See here. Since NReco does not support this directly, you would have to read this value with some other library and use it to rotate the JPEG in the stream, as in the sketch below.
I suggest using an ffmpeg wrapper that lets you directly invoke ffmpeg process instances, as ffmpeg is able to read various properties from the file.
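A minimal sketch of the rotation step (assuming the rotation angle in degrees has already been read from the metadata; rotation and rotatedThumbPath are hypothetical variables, thumbNailPath comes from the question):

using System.Drawing;

using (var img = Image.FromFile(thumbNailPath))
{
    // map the metadata rotation (degrees) to a GDI+ rotation
    if (rotation == 90) img.RotateFlip(RotateFlipType.Rotate90FlipNone);
    else if (rotation == 180) img.RotateFlip(RotateFlipType.Rotate180FlipNone);
    else if (rotation == 270) img.RotateFlip(RotateFlipType.Rotate270FlipNone);
    img.Save(rotatedThumbPath);
}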
You can use ffmpeg to read the rotation from the video metadata and apply an appropriate rotation filter during thumbnail extraction. Since NReco VideoConverter is a .NET ffmpeg wrapper, it can also be used for doing that:
1. Extract the video orientation metadata from the ffmpeg console output (LogReceived event) using the Invoke or ConvertMedia methods in a call that doesn't actually perform any conversion. The rotation data can be matched with a simple regex (see the sketch after this list).
2. Compose FFMpeg arguments for the appropriate rotation filter (like: -vf "transpose=1")
3. Extract the thumbnail with the ConvertMedia method that accepts extra ffmpeg command-line arguments (see the code snippet below)
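A minimal sketch of step 1 (the regex pattern and the throwaway single-frame conversion used to trigger log output are assumptions based on typical ffmpeg console output; inputFile is a placeholder):

var ffMpegConverter = new NReco.VideoConverter.FFMpegConverter();
string rotation = null;
ffMpegConverter.LogReceived += (o, args) => {
    // ffmpeg typically prints "rotate : 90" for rotated videos
    var m = System.Text.RegularExpressions.Regex.Match(args.Data, @"rotate\s*:\s*(\d+)");
    if (m.Success)
        rotation = m.Groups[1].Value;
};
// convert a single frame to a null stream just to capture the console output
ffMpegConverter.ConvertMedia(inputFile, null, System.IO.Stream.Null, "mjpeg",
    new NReco.VideoConverter.ConvertSettings() { MaxDuration = 1 });
// map: 90 -> transpose=1, 180 -> transpose=1,transpose=1, 270 -> transpose=2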
And here is the thumbnail extraction itself (internally, GetVideoThumbnail uses the ConvertMedia method):
var thumbSettings = new ConvertSettings() {
    VideoFrameCount = 1,
    VideoFrameRate = 1,
    MaxDuration = 1, // extract exactly 1 frame
    Seek = 0, // frame seek position
    CustomOutputArgs = String.Format(" -vf \"{0}\"", rotateFilter) // rotation filter parameters
};
ffMpegConverter.ConvertMedia(inputFile1, null, thumbJpegOutputStream, "mjpeg", thumbSettings);
As a result, you will get a video thumbnail rotated according to the video orientation metadata. Full code that implements all the steps can be found in the VideoConverter package (Rotate example).
common_video.avi
image1.jpg
image2.jpg
I want to insert these two images at the end of the video named common_video.avi programmatically in C#, so that each image shows for about 5 seconds after the video ends. What's the best way to achieve this? I have looked into ffmpeg, with and without C# wrappers, but nothing works; I keep getting errors and exceptions.
Here's a piece of code I have tried:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
//using AForge;
//using AForge.Video.VFW;
using AForge.Video.FFMPEG;
using AviFile;
using System.Drawing;

namespace Ffmpeg
{
    class Program
    {
        static void Main(string[] args)
        {
            var Strings = new string[] { "1.jpg", "2.jpg", "3.jpg" };

            //VideoFileWriter W = new VideoFileWriter();
            //W.Open("../../Out/new.avi", 1920, 1200, 1, VideoCodec.Raw, 4400);
            //foreach (var S in Strings)
            //{
            //    for (int I = 2 * 25; I > 0; I--)
            //        W.WriteVideoFrame((Bitmap)Image.FromFile(S));
            //}
            //W.Close();

            // load the first image
            Bitmap bitmap = (Bitmap)Image.FromFile(Strings[0]);
            // create a new AVI file
            AviManager aviManager =
                new AviManager(@"..\..\out\new.avi", false);
            // add a new video stream and one frame to the new file
            VideoStream aviStream =
                aviManager.AddVideoStream(false, 1, bitmap);

            //Bitmap bitmap;
            for (int n = 1; n < Strings.Length; n++)
            {
                if (Strings[n].Trim().Length > 0)
                {
                    bitmap = (Bitmap)Bitmap.FromFile(Strings[n]);
                    //for (int I = 2 * 25; I > 0; I--)
                    aviStream.AddFrame(bitmap);
                    bitmap.Dispose();
                }
            }
            aviManager.Close();
        }
    }
}
FFmpeg throws: "error configuring filters".
Not sure how the FFmpeg C API translates to C#, but adding a video stream in FFmpeg does not let you add more frames at the end of an existing video stream. It creates a new video stream, generating a video file with several video streams. Not only is this not what you want, but not all muxers support muxing two video streams, which may be why you're getting an error.
What you need could be achieved in one of the following ways:
Decode the original video and re-encode it. During re-encoding you can easily modify the video in any way and add images anywhere you want (see the sketch after this list). This is the easiest to implement, but you'll lose video quality due to double re-encoding if you encode into a lossy format.
Find out the exact parameters that were used to encode the original video (resolution, aspect ratio, colorspace, frame rate, codec, codec parameters, etc.), then encode your images using exactly the same codec with the same options. Then you can copy the video stream by reading/writing frames (no re-encoding) and append your newly encoded frames to it. This avoids re-encoding the original video, but it is much trickier to implement and prone to errors, especially if the original video was encoded not by FFmpeg but by a different encoder.
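A minimal sketch of the first approach, using AForge.Video.FFMPEG from the question (the output name and codec choice are assumptions, and the images are assumed to already match the video resolution):

using System.Drawing;
using AForge.Video.FFMPEG;

using (var reader = new VideoFileReader())
using (var writer = new VideoFileWriter())
{
    reader.Open("common_video.avi");
    writer.Open("combined.avi", reader.Width, reader.Height,
        reader.FrameRate, VideoCodec.MPEG4);

    // re-encode every frame of the original video
    Bitmap frame;
    while ((frame = reader.ReadVideoFrame()) != null)
    {
        writer.WriteVideoFrame(frame);
        frame.Dispose();
    }

    // append each image for roughly 5 seconds at the original frame rate
    foreach (var name in new[] { "image1.jpg", "image2.jpg" })
    {
        using (var img = new Bitmap(name))
        {
            for (int i = 0; i < reader.FrameRate * 5; i++)
                writer.WriteVideoFrame(img);
        }
    }
}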
There are two ways:
If you don't care about performance, you can read the frames one by one, decode each frame to a raw format (yuv420), and re-encode them with any codec you want. Your picture can be converted to yuv420 in the same way and encoded into the file.
If you just want to encode your picture into the AVI file as-is: you need to know which codec the video stream in your AVI file uses, then encode your picture with that codec, and the first picture must be a key frame. There is another problem: your codec parameters may not be compatible with the original video stream, which makes this difficult. You would have to parse the video stream and find the parameter details.
I have a WAV file that was recorded using a custom codec. The codec is installed on my machine, and the WAV file plays fine in Windows Media Player. I am using the NAudio routines to try to play the WAV file from C# code. The values for the custom format look weird, but I have painstakingly checked them by analysing the WAV file header. Here is the best C# code I have come up with for playing the file:
WaveFormat wfOKI = WaveFormat.CreateCustomFormat(WaveFormatEncoding.DialogicOkiAdpcm, 8000, 1, 3000, 48, 3);
WaveStream wsRaw = new WaveFileReader(txtFileName.Text);
wsRaw = WaveFormatConversionStream.CreatePcmStream(wsRaw); // Line A
wsRaw = new BlockAlignReductionStream(wsRaw); // Line B
WaveStream wsOKI = new RawSourceWaveStream(wsRaw, wfOKI);
WaveOut woCall = new WaveOut();
woCall.Init(wsOKI); // <-- This line gives an error.
woCall.Play();
while (woCall.PlaybackState == PlaybackState.Playing)
{
System.Threading.Thread.Sleep(300);
}
The Init() call causes the following error: "An unhandled exception of type 'NAudio.MmException' occurred in NAudio.dll. Additional information: WaveBadFormat calling waveOutOpen."
Is this code the correct strategy for playing a WAV with a custom codec? I have tried all four combinations of commenting Lines A and B in and out, with no difference to the error message.
I'm using Windows 7 64-bit, Visual Studio 2010 professional (project is set to x86), and version 1.6 of NAudio. I'm very new to NAudio, but did get a few lines going that played a "standard" WAV (i.e. a file that did not use a custom codec).
If you have a WAV file, then there is no need for the RawSourceWaveStream. You can play the converted stream directly:
WaveStream wsRaw = new WaveFileReader(txtFileName.Text);
wsRaw = WaveFormatConversionStream.CreatePcmStream(wsRaw);
WaveOut woCall = new WaveOut();
woCall.Init(wsRaw);
woCall.Play();
Also, you should not be calling Thread.Sleep to wait for playback to finish if you are using WaveOut. Try WaveOutEvent instead if you are not using this from an application with a GUI.
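A minimal console-style sketch (assuming a NAudio version where WaveOutEvent and the PlaybackStopped event are available):

var done = new System.Threading.ManualResetEvent(false);
using (var reader = new WaveFileReader(txtFileName.Text))
using (var pcm = WaveFormatConversionStream.CreatePcmStream(reader))
using (var output = new WaveOutEvent())
{
    output.PlaybackStopped += (s, e) => done.Set();
    output.Init(pcm);
    output.Play();
    done.WaitOne(); // block until playback has finished
}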
This example on Code Project is almost exactly what I need... except that saveFrameFromVideo takes a percentage instead of a frame number. How can I use it to extract frame X from a WMV file?
I've also tried FFmpeg.NET, but there weren't any downloadable builds and I couldn't get the source to build.
You can also try AsfMojo for this task; it allows you to extract an image by time offset:
Bitmap bitmap = AsfImage.FromFile(videoFileName)
.AtOffset(17.34);
Internally, the Media SDK and some custom stream manipulation are used to get frame-accurate still frames (up to a 100 millisecond tolerance), so if you know the frame rate of your media file (e.g. 25) you can calculate the time offset of the nearest frame:
int frameX = 400; //get 400th frame
double frameRate = 25.0;
double timeOffset = frameX / frameRate;
Bitmap bitmap = AsfImage.FromFile(videoFileName)
.AtOffset(timeOffset);
The magic is in this line:
mediaDet.WriteBitmapBits(streamLength * percentagePosition,
target.Width, target.Height, outputBitmapFile);
It calculates the stream position from the percentage and the length of the stream. Since you already know the frame number, you can compute the position directly and pass that instead.
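A minimal sketch of that change (mediaDet, target, and outputBitmapFile come from the Code Project example; the frame rate value is an assumption, so read it from the media file if you can):

int frameX = 400;        // the frame you want
double frameRate = 25.0; // hypothetical frame rate of the source file
double offsetSeconds = frameX / frameRate;
// pass an absolute time offset instead of streamLength * percentagePosition
mediaDet.WriteBitmapBits(offsetSeconds, target.Width, target.Height, outputBitmapFile);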
I have been working on extracting frames from webcam video and from video files; for both, I used the AForge library (you need to add references to AForge.Video, AForge.Imaging, AForge.Video.DirectShow and AForge.Video.FFMPEG). For live video, I added a videoSourcePlayer_NewFrame(object sender, ref Bitmap image) handler to get the frame; the Bitmap image parameter contains the required frame. This is the event handler for the VideoSourcePlayer control I added to the Windows Form.
For video from a file, I used (note that the NewFrame handler must be attached before calling Start):
videoSource = new FileVideoSource(fileName);
videoSource.NewFrame += new AForge.Video.NewFrameEventHandler(videoSource_NewFrame);
videoSource.Start();
videoSource_NewFrame is the event handler that is called whenever there is a new frame.
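A minimal sketch of that handler (assuming AForge.Video's NewFrameEventArgs; what you do with the frame is up to you):

private void videoSource_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    // the frame is only valid for the duration of the event, so clone it first
    Bitmap frame = (Bitmap)eventArgs.Frame.Clone();
    // ... process or save the frame ...
    frame.Dispose();
}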