I use Accord.Video.FFMPEG to create a video from 200 images with the H264 codec. For some reason the video quality is very poor; the file size is less than 1 MB. When I choose VideoCodec.Raw, the quality is high, but I am not happy with the huge file size.
I do something like this:
using (var vFWriter = new VideoFileWriter())
{
    vFWriter.Open(video_name, 1920, 1080, 24, VideoCodec.H264);

    for (int i = 0; i < 200; ++i)
    {
        var img_name_src = ...

        using (Bitmap src_jpg = new Bitmap(img_name_src))
        {
            vFWriter.WriteVideoFrame(src_jpg);
        }
    }

    vFWriter.Close();
}
When I run the program, these messages appear:
[swscaler @ 06c36d20] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 06e837a0] deprecated pixel format used, make sure you did set range correctly
[avi @ 06c43980] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
[avi @ 06c43980] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
I don’t know whether they affect anything.
This is what one of the source frames looks like, and this is the same frame from the video (images omitted).
How can I fix this?
Is there any other way in C# to create a video from individual frames?
Usually, video quality comes down to the bitrate, which can be changed with this overload:
writer.Open(fileName, width, height, frameRate, VideoCodec, BitRate);
With bitrates in the millions, the video still has artifacts on highly detailed frames but is mostly fine. In the billions, artifacts disappear entirely, but the file size skyrockets and playback speed suffers from disk retrieval times.
Try experimenting with different VideoCodec values, bitrates, and file types (MP4, AVI, WebM, etc.) to find a suitable balance for your project.
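For reference, here is a minimal sketch of the question's loop with an explicit bitrate; the output file name, the frame names, and the 10 Mbit/s value are assumptions, a starting point to tune rather than a recommendation:

using System.Drawing;
using Accord.Video.FFMPEG;

using (var vFWriter = new VideoFileWriter())
{
    // Same parameters as in the question, plus an explicit bitrate of 10 Mbit/s.
    vFWriter.Open("out.avi", 1920, 1080, 24, VideoCodec.H264, 10000000);

    for (int i = 0; i < 200; ++i)
    {
        using (var src_jpg = new Bitmap("frame" + i + ".jpg"))
        {
            vFWriter.WriteVideoFrame(src_jpg);
        }
    }
}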
common_video.avi
image1.jpg
image2.jpg
I want to insert these two images at the end of the video common_video.avi programmatically in C#, so that each image shows for about 5 seconds after the video ends. What's the best way to achieve this? I have looked into FFmpeg, with and without C# wrappers, but nothing works; I keep getting errors and exceptions.
Here's a piece of code I have tried:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
//using AForge;
//using AForge.Video.VFW;
using AForge.Video.FFMPEG;
using AviFile;
using System.Drawing;

namespace Ffmpeg
{
    class Program
    {
        static void Main(string[] args)
        {
            var Strings = new string[] { "1.jpg", "2.jpg", "3.jpg" };

            //VideoFileWriter W = new VideoFileWriter();
            //W.Open("../../Out/new.avi", 1920, 1200, 1, VideoCodec.Raw, 4400);
            //foreach (var S in Strings)
            //{
            //    for (int I = 2 * 25; I > 0; I--)
            //        W.WriteVideoFrame((Bitmap)Image.FromFile(S));
            //}
            //W.Close();

            // Load the first image.
            Bitmap bitmap = (Bitmap)Image.FromFile(Strings[0]);

            // Create a new AVI file.
            AviManager aviManager = new AviManager(@"..\..\out\new.avi", false);

            // Add a new video stream and one frame to the new file.
            VideoStream aviStream = aviManager.AddVideoStream(false, 1, bitmap);

            for (int n = 1; n < Strings.Length; n++)
            {
                if (Strings[n].Trim().Length > 0)
                {
                    bitmap = (Bitmap)Bitmap.FromFile(Strings[n]);
                    //for (int I = 2 * 25; I > 0; I--)
                    aviStream.AddFrame(bitmap);
                    bitmap.Dispose();
                }
            }

            aviManager.Close();
        }
    }
}
FFmpeg throws: "error configuring filters".
Not sure how the FFmpeg C API translates to C#, but adding a video stream in FFmpeg does not let you append frames to the end of an existing video stream. It creates a new video stream, producing a file with several video streams. Not only is that not what you want, but not all muxers support muxing two video streams, which may be why you're getting an error.
What you need can be achieved in one of the following ways:
Decode the original video and re-encode it. While re-encoding you can easily modify the video in any way and add images anywhere you want. This is the easiest to implement (see the sketch after this list), but you'll lose some video quality to the double re-encoding if you encode into a lossy format.
Find out the exact parameters used to encode the original video (resolution, aspect ratio, colorspace, frame rate, codec, codec parameters, etc.), then encode your images with exactly the same codec and the same options. You can then copy the original video stream by reading/writing frames (no re-encoding) and append your newly encoded frames to it. This avoids re-encoding the original video, but it is much trickier to implement and prone to errors, especially if the original video was encoded not by FFmpeg but by a different encoder.
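Here is a rough sketch of the first approach using the AForge.Video.FFMPEG reader/writer pair that the question already references. Treat it as an illustration only: the output file name, the MPEG4 codec choice, and the resize step are my assumptions, and error handling is omitted.

using System.Drawing;
using AForge.Video.FFMPEG;

using (var reader = new VideoFileReader())
using (var writer = new VideoFileWriter())
{
    reader.Open("common_video.avi");
    writer.Open("with_images.avi", reader.Width, reader.Height,
                reader.FrameRate, VideoCodec.MPEG4);

    // Re-encode every frame of the original video.
    Bitmap frame;
    while ((frame = reader.ReadVideoFrame()) != null)
    {
        writer.WriteVideoFrame(frame);
        frame.Dispose();
    }

    // Append each image for roughly 5 seconds; frames must match the
    // video dimensions, hence the resize.
    foreach (var name in new[] { "image1.jpg", "image2.jpg" })
    {
        using (var img = (Bitmap)Image.FromFile(name))
        using (var resized = new Bitmap(img, reader.Width, reader.Height))
        {
            for (int i = 0; i < 5 * reader.FrameRate; i++)
                writer.WriteVideoFrame(resized);
        }
    }
}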
There are two ways:
If you don't care about performance, you can read the frames one by one, decode each frame to a raw format (YUV420), and re-encode it with any codec you want. Your pictures get the same treatment: convert them to YUV420 and encode them into the file.
If you just want to encode your pictures into the AVI file without touching the rest: you need to know which codec the video stream in your AVI file uses, encode your pictures with that same codec, and the first picture must be a key frame. There is another problem: your codec parameters may not be compatible with the original video stream, which makes this difficult. You have to parse the video stream and find out the parameter details.
I am trying to implement audio recording to a WAV file using NAudio, but the default bitrate set by the WasapiLoopbackCapture class can't be changed programmatically.
I am recording the audio output to a MemoryStream (recordedStream in the snippet below). However, the default bitrate set by WasapiLoopbackCapture doesn't fit my needs.
I would like a bitrate of 320 kbps, and I tried to convert the recorded data programmatically using the WaveFormatConversionStream class, but I couldn't make it work.
WaveFormat targetFormat = WaveFormat.CreateCustomFormat(waveIn.WaveFormat.Encoding,
    waveIn.WaveFormat.SampleRate,     // Sample rate
    waveIn.WaveFormat.Channels,       // Channels
    320000,                           // Average bytes per second
    waveIn.WaveFormat.BlockAlign,     // Block align
    waveIn.WaveFormat.BitsPerSample); // Bits per sample

using (WaveStream inputStream = new RawSourceWaveStream(recordedStream, waveIn.WaveFormat))
{
    try
    {
        using (var converter = new WaveFormatConversionStream(targetFormat, inputStream))
        {
            // ...
        }
    }
    catch (Exception)
    {
        throw;
    }

    recordedStream.Dispose();
}
I always get an "AcmNotPossible calling acmStreamOpen" conversion exception. As you can see, I am using exactly the same format as the recorded WAV file (extensible encoding, 44100 Hz, etc.), except the bitrate, which is lower in the target WaveFormat.
What would be the correct code to do the bitrate conversion of a WAV file held in a MemoryStream? My goal is to get a 320 kbps file.
For a given sample rate, bit depth, and channel count, PCM audio always has the same bitrate, calculated by multiplying those three values together: for example, 44100 Hz × 16 bits × 2 channels = 1,411,200 bits/s, about 1411 kbps. If you want to reduce the bitrate, you must change one of those three values (lowering the sample rate is probably the best option, unless you can go from stereo to mono).
Really you should be thinking of encoding to a format like MP3, WMA or AAC, which will let you select your preferred bitrate.
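For example, here is a minimal sketch of encoding the recorded stream to MP3 at 320 kbps with NAudio's Media Foundation support (available on Windows 8 and later); the output file name is illustrative:

using NAudio.MediaFoundation;
using NAudio.Wave;

MediaFoundationApi.Startup();
recordedStream.Position = 0;

using (var source = new RawSourceWaveStream(recordedStream, waveIn.WaveFormat))
{
    // WasapiLoopbackCapture typically records 32-bit IEEE float, which the
    // MP3 encoder won't accept directly, so convert to 16-bit PCM first.
    var pcm16 = new WaveFloatTo16Provider(source);
    MediaFoundationEncoder.EncodeToMp3(pcm16, "loopback.mp3", 320000);
}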
EDIT: I keep getting "OutOfMemoryException was unhandled".
I think the problem is in how I am saving the image to isolated storage, and that this is where I can solve it: how do I reduce the size of the image before I save it? (I've added the code where I save the image.)
I am opening images from isolated storage, sometimes over 100 of them, and I want to loop over the images, but I get an OutOfMemoryException when around 100 to 150 images are loaded into a storyboard. I have already brought down the resolution of the images. How can I handle this exception and stop my app from crashing?
I get the exception at this line:
image.SetSource(isStoreTwo.OpenFile(projectFolder + "\\MyImage" + i + ".jpg", FileMode.Open, FileAccess.Read));//images from isolated storage
Here's my code:
private void OnLoaded(object sender, RoutedEventArgs e)
{
    IsolatedStorageFile isStoreTwo = IsolatedStorageFile.GetUserStoreForApplication();
    try
    {
        storyboard = new Storyboard
        {
            //RepeatBehavior = RepeatBehavior.Forever
        };

        var animation = new ObjectAnimationUsingKeyFrames();
        Storyboard.SetTarget(animation, projectImage);
        Storyboard.SetTargetProperty(animation, new PropertyPath("Source"));
        storyboard.Children.Add(animation);

        for (int i = 1; i <= savedCounter; i++)
        {
            BitmapImage image = new BitmapImage();
            image.SetSource(isStoreTwo.OpenFile(projectFolder + "\\MyImage" + i + ".jpg", FileMode.Open, FileAccess.Read)); // images from isolated storage

            var keyframe = new DiscreteObjectKeyFrame
            {
                KeyTime = KeyTime.FromTimeSpan(TimeSpan.FromMilliseconds(100 * i)),
                Value = image
            };
            animation.KeyFrames.Add(keyframe);
        }
    }
    catch (OutOfMemoryException exc)
    {
        //throw;
    }

    Resources.Add("ProjectStoryBoard", storyboard);
    storyboard.Begin();
}
EDIT: This is how I am saving the image to isolated storage. I think this is where I can solve my problem: how do I reduce the size of the image when saving it to isolated storage?
void cam_CaptureImageAvailable(object sender, Microsoft.Devices.ContentReadyEventArgs e)
{
    string fileName = folderName + "\\MyImage" + savedCounter + ".jpg";
    try
    {
        // Save picture to the library camera roll.
        //library.SavePictureToCameraRoll(fileName, e.ImageStream);

        // Set the position of the stream back to start.
        e.ImageStream.Seek(0, SeekOrigin.Begin);

        // Save picture as JPEG to isolated storage.
        using (IsolatedStorageFile isStore = IsolatedStorageFile.GetUserStoreForApplication())
        {
            using (IsolatedStorageFileStream targetStream = isStore.OpenFile(fileName, FileMode.Create, FileAccess.Write))
            {
                // Initialize the buffer for 4KB disk pages.
                byte[] readBuffer = new byte[4096];
                int bytesRead = -1;

                // Copy the image to isolated storage.
                while ((bytesRead = e.ImageStream.Read(readBuffer, 0, readBuffer.Length)) > 0)
                {
                    targetStream.Write(readBuffer, 0, bytesRead);
                }
            }
        }
    }
    finally
    {
        // Close image stream.
        e.ImageStream.Close();
    }
}
I would appreciate it if you could help me. Thanks.
It doesn't matter how large your images are on disk, because when you load them into memory they're going to be uncompressed. The memory required for the image will be approximately (stride * height), where stride is (width * bitsPerPixel) / 8, rounded up to the next multiple of 4 bytes. So an image that's 1024x768 at 24 bits per pixel will take up about 2.25 MB.
You should figure out how large your images are, uncompressed, and use that number to determine the memory requirements.
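As a quick back-of-the-envelope helper (my own sketch, not from the answer above), that calculation looks like this:

// Estimate the uncompressed in-memory size of an image: the stride is the
// row size in bytes, rounded up to the next multiple of 4.
static long UncompressedBytes(int width, int height, int bitsPerPixel)
{
    int stride = (width * bitsPerPixel / 8 + 3) & ~3;
    return (long)stride * height;
}

// UncompressedBytes(1024, 768, 24) returns 2,359,296 bytes, about 2.25 MB.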
You are getting the OutOfMemoryException because you are storing all the images in memory at the same time in order to create your StoryBoard. I don't think you will be able to get around the uncompressed bitmap size that the images require to be displayed on screen.
So to get past this, we must think about your goal rather than trying to fix the error. If your goal is to show a new image in sequence every X ms, then you have a few options.
Keep using StoryBoards, but chain them using the Completed event. This way you don't have to create them all at once, just the next few (see the sketch after this list). It might not be fast enough, though, if you're changing images every 100 ms.
Use CompositionTarget.Rendering as mentioned in my answer here. This would probably take the least amount of memory if you preload just the next image (as opposed to having them all preloaded, as your current solution does). You'd need to check the elapsed time manually, though.
Rethink what you're doing. If you state what you are going after, people might have more alternatives.
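A hedged sketch of the first option; CreateStoryboardFor is a hypothetical helper that builds a short animation for a range of frame indices, along the lines of the OnLoaded code in the question:

int nextFrame = 1;

void PlayNextBatch()
{
    // Build a storyboard for only the next 10 frames (hypothetical helper),
    // so only a small batch of images is in memory at a time.
    Storyboard sb = CreateStoryboardFor(nextFrame, nextFrame + 9);
    nextFrame += 10;

    // When this batch finishes, build and start the next one.
    sb.Completed += (s, e) => { if (nextFrame <= savedCounter) PlayNextBatch(); };
    sb.Begin();
}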
To answer the edit at the top of your post, try ImageResizer. There's a NuGet package, and a Hanselman blog episode on it. Obviously, this is ASP.NET based, but I'm sure you could butcher it to work in your scenario.
Tackling these kinds of problems at the design level usually works better.
Making the application aware of its running environment via some configuration makes it more robust. For example, you can define variables like image size, image count, and image quality based on available memory, and set these variables at run-time in your app. That way your application always works: fast on high-memory machines and slower on low-memory ones, but it never crashes. (Don't believe that working in a managed environment means you never have to worry about the environment; design always matters.) A sketch of this idea follows below.
Also, there are some known design patterns, like lazy loading, that you can benefit from.
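As a minimal illustration (my own sketch, not from the original answer), one way to read the device memory on Windows Phone 7 and derive tuning values from it; the thresholds are made up:

using Microsoft.Phone.Info;

// DeviceTotalMemory is a DeviceExtendedProperties key on WP7; the
// thresholds below are arbitrary tuning values, not recommendations.
long totalMemory = (long)DeviceExtendedProperties.GetValue("DeviceTotalMemory");
int maxImages  = totalMemory > 256 * 1024 * 1024 ? 150 : 75;
int imageWidth = totalMemory > 256 * 1024 * 1024 ? 1024 : 640;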
I don't know about Windows Phone in particular, but in .NET WinForms you need to use a separate thread when doing a long-running task. Are you using a BackgroundWorker or equivalent? The finalizer thread can become blocked, which will prevent the resources for the images from being disposed. Using a thread separate from the UI thread will allow the Dispose method to be run automatically.
OK, an image (1024x768) has a memory size of at least 3 MB (ARGB).
I don't know how ObjectAnimationUsingKeyFrames works internally. Maybe you can force the GC by destroying the BitmapImage (and keyframe) instances without losing their data in the animation.
(Not possible, see comments!)
Based on one of your comments, you are building a time-lapse app. Commercial time-lapse apps for WP7 compress the images to video, not stills, e.g. Time Lapse Pro.
The whole point of video playback is to reduce similar, or time-related, images to a highly compressed stream that does not require massive amounts of memory to play back.
If you can add the ability to encode to video in your app, you will avoid the problem of trying to emulate a video player (using hundreds of single full-resolution frames as a flick-book).
Processing the images into video server-side may be another option (though not as friendly as doing it in-camera).
After a whole day of testing, I came up with this code, which captures the current screen using DirectX (SlimDX) and saves it to a file:
Device d;

public DxScreenCapture()
{
    PresentParameters present_params = new PresentParameters();
    present_params.Windowed = true;
    present_params.SwapEffect = SwapEffect.Discard;
    d = new Device(new Direct3D(), 0, DeviceType.Hardware, IntPtr.Zero, CreateFlags.SoftwareVertexProcessing, present_params);
}

public Surface CaptureScreen()
{
    Surface s = Surface.CreateOffscreenPlain(d, Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
    d.GetFrontBufferData(0, s);
    return s;
}
Then I do the following:
DxScreenCapture sc = new DxScreenCapture();
// ...code here...
private void button1_Click(object sender, EventArgs e)
{
    Stopwatch stopwatch = new Stopwatch();

    // Begin timing.
    stopwatch.Start();

    Surface s = sc.CaptureScreen();
    Surface.ToFile(s, @"c:\temp\test.png", ImageFileFormat.Png);
    s.Dispose();

    stopwatch.Stop();
    textBox1.Text = ("Elapsed:" + stopwatch.Elapsed.TotalMilliseconds);
}
The results are:
0. when I don't save the surface: avg. elapsed time 80-90 ms
1. when I also save the Surface to a BMP file (ImageFileFormat.Bmp): avg. elapsed time 120 ms, file size 7 MB
2. when I also save the Surface to a PNG file (ImageFileFormat.Png): avg. elapsed time 800 ms, file size 300 KB
The questions are:
1. Is it possible to optimize the current image capture? According to the article below, DirectX screen capture should be faster than GDI. For me, GDI usually takes 20 ms to get a Bitmap, whereas it takes 80 ms to get a Surface using DirectX (both without saving):
http://www.codeproject.com/Articles/274461/Very-fast-screen-capture-using-DirectX-in-Csharp
2a. How can I save the Surface to PNG format faster? Saving the surface to a 7 MB BMP file takes almost six times less time than saving the same surface to a 300 KB PNG file.
2b. Is it possible to save a Surface directly to a Bitmap, so I don't have to create temporary files?
That is, instead of Surface -> image file, then image file -> Bitmap, just Surface -> Bitmap.
That's all for now. I'll gladly accept any tips, thanks!
Edit:
Just solved 2b by doing:
Bitmap bitmap = new Bitmap(SlimDX.Direct3D9.Surface.ToStream(s, SlimDX.Direct3D9.ImageFileFormat.Bmp));
Edit2:
Surface.ToFile(s, @"C:\temp\test.bmp", ImageFileFormat.Bmp);
Bitmap bitmap = new Bitmap(@"C:\temp\test.bmp");
is faster than:
Bitmap bitmap = new Bitmap(SlimDX.Direct3D9.Surface.ToStream(s, SlimDX.Direct3D9.ImageFileFormat.Bmp));
by 100 ms! Yeah, I couldn't believe my eyes either ;) I don't like the idea of creating a temporary file, but a 50% performance increase (100-200 ms instead of 200-300+) is a very good thing.
If you don't want to use the SlimDX library, you can also try:

public Bitmap GimmeBitmap(Surface s)
{
    GraphicsStream gs = SurfaceLoader.SaveToStream(ImageFileFormat.Bmp, s);
    return new Bitmap(gs);
}
and try the same for PNG. I did not test the performance, but it should be faster than using a temporary file on disk :)
As for the first question: try creating the surface only once, and then for every screenshot just fill it with the device's front-buffer data and create the bitmap:

d.GetFrontBufferData(0, s);
return new Bitmap(SurfaceLoader.SaveToStream(ImageFileFormat.Bmp, s));

This should save you some time :)
If performance really is an issue, you should consider writing your code in C++ instead. That way you don't need an external library and can directly access the back buffer of your video card via the Windows API + DirectX.
Accessing the back (video) buffer is a lot faster than reading from the front buffer.
To optimize performance (which also answers your question 1), use multithreading (see the TPL or threading, depending on your needs).
Here is an inside look at how to do it in C++: CodeProject examples in C++.
From my personal experience, DirectX was by far the fastest.
These steps:
1. reading the back buffer into a bitmap to process the data
2. spawning a new thread to repeat step 1 while the previous thread is still busy
take about 10-40 ms together, implemented in C++ (on an NVIDIA GeForce GTX 970M) and depending on the current workload of the hardware.
Possible middle course:
If you want to stick with C# but also need the performance, writing a C++/CLI DLL for .NET (see ".NET Programming with C++/CLI (Visual C++)") that reads the video buffer and returns the data to your C# code will do the trick.
I am using the following code to leverage Windows Media Encoder to record the screen. I am using Windows Vista, screen resolution 1024 × 768, 32-bit. My issue is that the video records successfully, but when I play back the recording the quality is not very good; e.g. characters are very blurry. I am wondering which parameters I should tune to get better quality in the recorded video.
My code:
static WMEncoder encoder = new WMEncoder();

IWMEncSourceGroup SrcGrp;
IWMEncSourceGroupCollection SrcGrpColl;
SrcGrpColl = encoder.SourceGroupCollection;
SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");

IWMEncVideoSource2 SrcVid;
SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");

IWMEncFile File = encoder.File;
File.LocalFileName = "C:\\OutputFile.avi";

// Choose a profile from the collection.
IWMEncProfileCollection ProColl = encoder.ProfileCollection;
IWMEncProfile Pro;
for (int i = 0; i < ProColl.Count; i++)
{
    Pro = ProColl.Item(i);
    if (Pro.Name == "Windows Media Video 8 for Local Area Network (384 Kbps)")
    {
        SrcGrp.set_Profile(Pro);
        break;
    }
}

encoder.Start();
Thanks in advance,
George
Video encoders use a certain kbit/second budget to limit the size of the generated stream. The fewer kbits/sec, the less detail you get, due to fewer DCT coefficients and bigger quantization values. In other words, the more kbits/sec you put into the video, the more detail the encoder can store in the stream.
Judging by your code, you have chosen a profile that uses 384 kbit/s, which is not very much for a 1024x768 video. You should try other profiles or set the bitrate you want yourself.
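For example, here is a hedged variation on the profile loop from the question that picks a higher-bitrate system profile; profile names vary with the Windows Media Encoder installation, so treat the name below as an assumption and enumerate ProColl to see what is actually available:

// Pick a higher-bitrate profile (name assumed; print ProColl.Item(i).Name
// inside the loop to list what your encoder installation actually offers).
for (int i = 0; i < ProColl.Count; i++)
{
    Pro = ProColl.Item(i);
    if (Pro.Name == "Windows Media Video 8 for Broadband (NTSC, 1400 Kbps)")
    {
        SrcGrp.set_Profile(Pro);
        break;
    }
}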