common_video.avi
image1.jpg
image2.jpg
I want to append these two images to the end of the video named common_video.avi programmatically in C#, so that each image shows for about 5 seconds after the video ends. What's the best way to achieve this? I have looked into FFmpeg, with and without C# wrappers, but nothing works so far; I keep getting errors and exceptions.
Here's a piece of code I have tried:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
//using AForge;
//using AForge.Video.VFW;
using AForge.Video.FFMPEG;
using AviFile;
using System.Drawing;

namespace Ffmpeg
{
    class Program
    {
        static void Main(string[] args)
        {
            var Strings = new string[] { "1.jpg", "2.jpg", "3.jpg" };

            //VideoFileWriter W = new VideoFileWriter();
            //W.Open("../../Out/new.avi", 1920, 1200, 1, VideoCodec.Raw, 4400);
            //foreach (var S in Strings)
            //{
            //    for (int I = 2 * 25; I > 0; I--)
            //        W.WriteVideoFrame((Bitmap)Image.FromFile(S));
            //}
            //W.Close();

            //load the first image
            Bitmap bitmap = (Bitmap)Image.FromFile(Strings[0]);

            //create a new AVI file
            AviManager aviManager =
                new AviManager(@"..\..\out\new.avi", false);

            //add a new video stream and one frame to the new file
            VideoStream aviStream =
                aviManager.AddVideoStream(false, 1, bitmap);

            for (int n = 1; n < Strings.Length; n++)
            {
                if (Strings[n].Trim().Length > 0)
                {
                    bitmap = (Bitmap)Bitmap.FromFile(Strings[n]);
                    //for (int I = 2 * 25; I > 0; I--)
                    aviStream.AddFrame(bitmap);
                    bitmap.Dispose();
                }
            }
            aviManager.Close();
        }
    }
}
FFmpeg throws: "error configuring filters".
I'm not sure how the FFmpeg C API translates to C#, but adding a video stream in FFmpeg does not let you append frames to the end of an existing video stream. It creates a new video stream, producing a file with several video streams. Not only is this not what you want, but not all muxers support muxing two video streams, which may be why you're getting an error.
What you need can be achieved in one of the following ways:
Decode the original video and re-encode it. During re-encoding you can easily modify the video in any way and add images anywhere you want. This is the easiest to implement, but you'll lose video quality to the double re-encoding if you target a lossy format.
Find out the exact parameters used to encode the original video (resolution, aspect ratio, colorspace, frame rate, codec, codec parameters, etc.), then encode your images with exactly the same codec and options. You can then copy the original video stream by reading/writing frames (no re-encoding) and append your newly encoded frames to it. This avoids re-encoding the original video, but it is much trickier to implement and prone to errors, especially if the original video was encoded not by FFmpeg but by a different encoder.
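The first (decode and re-encode) approach can be sketched with the Accord.Video.FFMPEG wrapper mentioned later in this thread. This is an illustrative sketch, not a tested solution: the output file name and codec are arbitrary choices, the frames-per-image count assumes 25 fps, and the images are assumed to already match the video dimensions (resize them first if they don't).

```csharp
using System.Drawing;
using Accord.Video.FFMPEG;

class AppendImages
{
    static void Main()
    {
        using (var reader = new VideoFileReader())
        using (var writer = new VideoFileWriter())
        {
            reader.Open("common_video.avi");

            // Re-encode with the source dimensions and frame rate.
            writer.Open("out.avi", reader.Width, reader.Height,
                        reader.FrameRate, VideoCodec.MPEG4);

            // 1) Copy (decode + re-encode) every frame of the original video.
            Bitmap frame;
            while ((frame = reader.ReadVideoFrame()) != null)
            {
                writer.WriteVideoFrame(frame);
                frame.Dispose();
            }

            // 2) Append each image for ~5 seconds.
            // Assumes 25 fps; adjust to your video's actual frame rate.
            int framesPerImage = 5 * 25;
            foreach (var file in new[] { "image1.jpg", "image2.jpg" })
            {
                using (var img = (Bitmap)Image.FromFile(file))
                {
                    for (int i = 0; i < framesPerImage; i++)
                        writer.WriteVideoFrame(img);
                }
            }
        }
    }
}
```

Depending on the wrapper version (AForge vs. Accord), FrameRate may be an int or a Rational, so the arithmetic may need adjusting.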
There are two ways:
If you don't care about performance, you can read the frames one by one, decode each frame to a raw format (yuv420), and re-encode it with any codec you want. Your pictures can be handled the same way: convert each to yuv420 and encode it into the file.
If you only want to encode your pictures into the existing AVI file: you need to know which codec the video stream in your AVI file uses, then encode your pictures with that same codec, and the first encoded picture must be a key frame. There is another problem: your codec parameters may not be compatible with the original video stream, which makes this difficult. You would have to parse the video stream to find the exact parameter details.
Related
In my app I am recording voice using AudioRecorder as shown on the following site: Audio Recorder. It works, but it produces large WAV files.
For example, if I record audio for 1 minute it takes 4MB to 5MB, so I want to convert the WAV file into an MP3 file to reduce its size. Please help me compress the WAV file and give some example. Thanks in advance.
I never tried converting files before, so I looked up some threads that might be helpful to you.
One covers converting WAV to MP3, which requires reading the file into a byte[]:
// requires the NAudio and NAudio.Lame packages
using System.IO;
using System.Net;
using NAudio.Wave;
using NAudio.Lame;

public byte[] ConvertToMp3(Uri uri)
{
    using (var client = new WebClient())
    {
        var file = client.DownloadData(uri);
        var target = new WaveFormat(8000, 16, 1);
        using (var outPutStream = new MemoryStream())
        using (var waveStream = new WaveFileReader(new MemoryStream(file)))
        using (var conversionStream = new WaveFormatConversionStream(target, waveStream))
        using (var writer = new LameMP3FileWriter(outPutStream, conversionStream.WaveFormat, 32, null))
        {
            conversionStream.CopyTo(writer);
            return outPutStream.ToArray();
        }
    }
}
However, this method uses a third-party service to download the WAV file before conversion, and it does not guarantee that the file size will be reduced.
I have also checked that you can compress WAV files using a library called zlib; just decompress the file whenever you need it.
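That compress-then-decompress idea can be sketched with the framework's built-in GZipStream (zlib-style DEFLATE), so no extra library is strictly required. A minimal sketch:

```csharp
using System.IO;
using System.IO.Compression;

static class WavCompression
{
    // Compress raw WAV bytes with DEFLATE (gzip container).
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(data, 0, data.Length);
            return output.ToArray();
        }
    }

    // Restore the original WAV bytes.
    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return output.ToArray();
        }
    }
}
```

Note that general-purpose lossless compression typically shrinks PCM audio far less than a lossy codec like MP3 does.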
Please check the link below:
How to convert wav file to mp3 in memory?
Reducing WAV sound file size, without losing quality
I'm using SharpAvi's AviWriter to create AVI files (20 frames per second, no compression, using the example from this link), and any standard video player can play them, so that part seems to be working fine.
Failing to find a SharpAvi AviReader anywhere, I've resorted to AForge.Video.VFW's AVIReader, but it only shows a black screen for each frame before index 128 and then fails to get the next frame.
The example I'm using is straightforward:
// instantiate AVI reader
AVIReader reader = new AVIReader();
// open video file
reader.Open("test.avi");
// read the video file
while (reader.Position - reader.Start < reader.Length)
{
    // get next frame
    Bitmap image = reader.GetNextFrame();
    // ...process the frame somehow or display it, then release it
    image.Dispose();
}
reader.Close();
I have my app's build set to x86 to accommodate the 32-bit requirements of both these AVI libraries.
Also, AForge.Video.VFW's AVIWriter fails to write files with more than roughly 500 frames (the video player needs to rebuild the index, and the C# IDE fails opening the AVI file).
does SharpAVI have an AVIReader? because I haven't found one.
I use Accord.Video.FFMPEG to create a video from 200 images with the H264 codec. For some reason the video is of very poor quality; its size is less than 1MB. When choosing VideoCodec.Raw the quality is high, but I am not happy with the huge file size.
I do something like this:
using (var vFWriter = new VideoFileWriter())
{
    vFWriter.Open(video_name, 1920, 1080, 24, VideoCodec.H264);
    for (int i = 0; i < 200; ++i)
    {
        var img_name_src = ...
        using (Bitmap src_jpg = new Bitmap(img_name_src))
        {
            vFWriter.WriteVideoFrame(src_jpg);
        }
    }
    vFWriter.Close();
}
When I run the program, messages appear:
[swscaler # 06c36d20] deprecated pixel format used, make sure you did set range correctly
[swscaler # 06e837a0] deprecated pixel format used, make sure you did set range correctly
[avi # 06c43980] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
[avi # 06c43980] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
I don't know whether these warnings affect anything.
(Screenshots were attached here comparing a single source frame with the corresponding degraded frame from the video.)
How to fix it?
Is there any other way in C# to create a video from individual frames?
Usually, video quality comes down to the bitrate, which can be changed with this overload:
writer.Open(fileName, width, height, frameRate, VideoCodec, BitRate);
With bitrates in the millions (a few Mbit/s), the video still shows artifacts on high-detail frames but is mostly fine. In the billions, the artifacts disappear entirely, but the file size skyrockets and playback is affected by retrieval times from the disk.
Try experimenting with different VideoCodecs, bitrates and file types (mp4, avi, webm etc) to find a suitable balance for your project.
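As a concrete sketch of that overload with the Accord wrapper from the question (the output file name and the ~8 Mbit/s figure are arbitrary placeholders, not recommendations):

```csharp
using Accord.Video.FFMPEG;

using (var writer = new VideoFileWriter())
{
    // Same call as in the question, plus an explicit bitrate (~8 Mbit/s here).
    writer.Open("out.avi", 1920, 1080, 24, VideoCodec.H264, 8000000);
    // ...WriteVideoFrame(...) calls as before...
}
```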
I'm developing a WPF application, where I have to play audio. I receive the audio data in .mp4 format (in a byte array) and the only restriction is that I can't write it out to the hard disk.
I found a couple of solutions for playing the .mp4 format, for example WMPLib.WindowsMediaPlayer, but I can't give a byte array or a stream to this library to play the audio; it only accepts a file path.
Then I found System.Media.SoundPlayer, which can play audio from a stream, but only in .wav format. So I started searching for ways to convert from MP4 to WAV. I found the NAudio library and could do the conversion the following way:
using (var data = new MediaFoundationReader(filePath))
{
    var stream = new MemoryStream();
    WaveFileWriter.WriteWavFileToStream(stream, data);
}
The problem with this is that MediaFoundationReader can only be instantiated with a file path parameter. I didn't find any way to create it without using files, so I think this is also a dead end.
So, any suggestion would be helpful on how to convert the audio in memory, or how to play the .mp4 directly from a byte array or stream.
You can convert any audio format with NAudio.
See samples like: How to convert a MP3 file to WAV with NAudio in WinForms C#,
which uses classes such as MediaFoundationReader.
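If the restriction is that the data only exists as a byte array, it may also be worth checking whether your NAudio version includes StreamMediaFoundationReader, which wraps a Stream instead of a file path. A minimal sketch under that assumption:

```csharp
using System.IO;
using NAudio.Wave;

static class Mp4ToWav
{
    // Convert in-memory MP4/AAC audio into an in-memory WAV stream.
    public static MemoryStream Convert(byte[] mp4Data)
    {
        var output = new MemoryStream();
        using (var input = new MemoryStream(mp4Data))
        using (var reader = new StreamMediaFoundationReader(input))
        {
            // Writes a complete WAV (header + data) into the output stream.
            WaveFileWriter.WriteWavFileToStream(output, reader);
        }
        output.Position = 0;
        return output; // can then be handed to System.Media.SoundPlayer
    }
}
```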
Finally I found a solution which converts to .mp3 format (it could also convert to .wav). I used the UWP transcode API the following way:
public static async void ConvertMp4ToMp3(byte[] mp4Data, Action<Stream> doneCallback)
{
    MediaEncodingProfile profile = MediaEncodingProfile.CreateMp3(AudioEncodingQuality.High);
    var inputStream = new MemoryRandomAccessStream(mp4Data);
    var outputStream = new InMemoryRandomAccessStream();

    MediaTranscoder transcoder = new MediaTranscoder();
    PrepareTranscodeResult prepareOperation = await transcoder.PrepareStreamTranscodeAsync(inputStream, outputStream, profile);

    if (prepareOperation.CanTranscode)
    {
        //start the conversion
        var transcodeOperation = prepareOperation.TranscodeAsync();
        //register the completed event handler
        transcodeOperation.Completed += (IAsyncActionWithProgress<double> asyncInfo, AsyncStatus status) =>
        {
            asyncInfo.GetResults();
            var stream = outputStream.AsStream();
            stream.Position = 0;
            doneCallback(stream);
        };
    }
    else
    {
        doneCallback(null);
    }
}
The imports:
using System;
using System.IO;
using Windows.Foundation;
using Windows.Media.MediaProperties;
using Windows.Media.Transcoding;
using Windows.Storage.Streams;
And the MemoryRandomAccessStream is just an implementation of the IRandomAccesStream interface and can be found here.
Right now I am using ghostscript in Unity to convert pdfs to jpgs and view them in my project.
Currently it flows like so:
- PDFs are converted into multiple JPEGs (one per page)
- The converted JPEGs are written to disk
- They are then read in as bytes into a 2D texture
- This 2D texture is assigned to a GameObject's RawImage component
This works perfectly in Unity, but... (now comes the hiccup) my project is intended to run on the Microsoft Hololens.
The Hololens runs on the Windows 10 API, but in a limited capacity.
Where the issue arises is when I try to convert PDFs and view them on the HoloLens: quite simply, the HoloLens cannot create or delete files outside of its known folders (Pictures, Documents, etc.).
My imagined solution is, instead of writing the converted JPEG files to disk, to write them to memory and view them from there.
In talking with Ghostscript devs, I was told GhostScript.NET does what I'm looking for: converting PDFs and viewing them from memory (it does this with the Rasterizer/Viewer classes, I believe, but again I don't understand it well).
I've been led to look at the latest GhostScript.NET docs to figure out how this is done, but I simply don't understand them well enough to approach this.
My question is then, based on how I'm using Ghostscript now, how do I use GhostScript.NET in my project to write the converted JPEGs into memory and view them there?
Here's how I'm doing it now (code-wise):
//instantiate
byte[] fileData;
Texture2D tex = null;

//if a PDF file exists at the current head path
if (File.Exists(CurrentHeadPath))
{
    //Transform pdf to jpg
    PdfToImage.PDFConvert pp = new PDFConvert();
    pp.OutputFormat = "jpeg";     //format
    pp.JPEGQuality = 100;         //100% quality
    pp.ResolutionX = 300;         //dpi
    pp.ResolutionY = 500;
    pp.OutputToMultipleFile = true;
    CurrentPDFPath = "Data/myFiles/pdfconvimg.jpg";

    //this call is what actually converts the pdf to jpeg files
    pp.Convert(CurrentHeadPath, CurrentPDFPath);

    //this just loads the first image
    if (File.Exists("Data/myFiles/pdfconvimg" + 1 + ".jpg"))
    {
        //reads in the jpeg file by bytes
        fileData = File.ReadAllBytes("Data/myFiles/pdfconvimg" + 1 + ".jpg");
        tex = new Texture2D(2, 2);
        tex.LoadImage(fileData); //this will auto-resize the texture dimensions

        //read the texture into the RawImage component
        PdfObject.GetComponent<RawImage>().texture = tex;
        PdfObject.GetComponent<RawImage>().rectTransform.sizeDelta = new Vector2(288, 400);
        PdfObject.GetComponent<RawImage>().enabled = true;
    }
    else
    {
        Debug.Log("reached eof");
    }
}
The Convert function is from a script called PDFConvert which I obtained from Code Project, specifically How To Convert PDF to Image Using Ghostscript API.
From the GhostScript.NET documentation, take a look at the example code labeled "Using GhostscriptRasterizer class", specifically the following lines:
Image img = _rasterizer.GetPage(desired_x_dpi, desired_y_dpi, pageNumber);
img.Save(pageFilePath, ImageFormat.Png);
The Image class is part of the System.Drawing namespace, and System.Drawing.Image has another Save overload whose first parameter is a System.IO.Stream.
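Putting those pieces together, here is a hedged sketch. Note that the GetPage signature varies between GhostScript.NET versions (older ones take separate X/Y DPI values, newer ones a single DPI), Texture2D and LoadImage are from UnityEngine, and the method name and DPI value are illustrative:

```csharp
using System.Drawing.Imaging;
using System.IO;
using Ghostscript.NET.Rasterizer;
using UnityEngine;

static class PdfRenderer
{
    // Rasterize one PDF page straight into a Unity texture, no temp files.
    public static Texture2D RenderPdfPage(string pdfPath, int pageNumber, int dpi)
    {
        using (var rasterizer = new GhostscriptRasterizer())
        {
            rasterizer.Open(pdfPath);
            using (var img = rasterizer.GetPage(dpi, dpi, pageNumber))
            using (var ms = new MemoryStream())
            {
                // Save into memory instead of to disk.
                img.Save(ms, ImageFormat.Png);
                var tex = new Texture2D(2, 2);
                tex.LoadImage(ms.ToArray()); // auto-resizes the texture
                return tex;
            }
        }
    }
}
```

The returned texture can then be assigned to the RawImage component exactly as in the existing code, skipping the File.ReadAllBytes step.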