First:
Well, I'm tired of asking the same question over and over; I do know that I can ask about coding techniques, not just specific problem points.
Background:
My project is to make a desktop recorder. I tried WM Encoder; it works, but double-click doesn't work in most cases (e.g. I can't open My Computer, I have to press Enter instead). I searched around and it turned out that WM Encoder itself is the problem, and fixing the double-click issue is a matter of waiting for a new version.
Now:
These are my previous questions related to my problem:
Re size and compress jpeg
sort files from folder in String[] Array
create a video from a list of JPEG files
Combine images into a movie
So:
How do I save a list of images as one video using ffmpeg, step by step?
I got this: ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi, but I have no idea how to use it in my project or where this command is supposed to go.
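A minimal sketch of where a line like that goes in a .NET project: you don't embed it in your source, you launch ffmpeg as a separate process and pass the line as its arguments. This assumes ffmpeg is on the PATH, the frames have already been saved as foo-001.jpeg, foo-002.jpeg, ..., and 640x480 stands in for WxH.

using System;
using System.Diagnostics;

class FrameEncoder
{
    // Runs the documented command against frames already written to workingDirectory.
    static void EncodeFrames(string workingDirectory)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-f image2 -i foo-%03d.jpeg -r 12 -s 640x480 foo.avi",
            WorkingDirectory = workingDirectory,
            UseShellExecute = false,
            RedirectStandardError = true // ffmpeg writes its log to stderr
        };

        using (var ffmpeg = Process.Start(psi))
        {
            string log = ffmpeg.StandardError.ReadToEnd(); // read before waiting to avoid a deadlock
            ffmpeg.WaitForExit();
            if (ffmpeg.ExitCode != 0)
                throw new InvalidOperationException("ffmpeg failed:\n" + log);
        }
    }
}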
Related
I am trying to make a real-time video recorder with ffmpeg (not a screen recorder; a Unity player recorder of specific dimensions).
I am able to get the ARGB data, and so far I have been writing it out as a bunch of BMPs and then running ffmpeg's concat command like
ffmpeg -i files.txt -i pictures/pic%06d.bmp output.mp4
with different codecs etc., and my files.txt is essentially (pseudo):
ffconcat version 1.0
file pic000000.bmp
duration 0.016
# etc.; basically the durations were generated from timestamps
Anyway, that all works, believe it or not, but writing the files to disk as BMPs (or even encoding them to a compressed format and then writing that to disk) takes a lot of extra time, and I would prefer to pipe the data directly to ffmpeg.
I know that in some cases you can use - as the input and then, from whatever programming language the process was started from, pass the byte data through stdin. The problem:
I have only been able to find out how to do this with a fixed frame rate, not with concat, and I (think?) I need concat here because it's very important that the images carry exact timestamps so they line up with the audio; there is a slight delay when capturing the frames, and so far I have been calculating each frame's duration from its timestamp (the last one has no duration) in order to line them up perfectly with the audio. But as far as I can find, the concat feature seems to require the files to already be written to disk and then listed in a text file.
So is there any way to give each frame a custom duration without writing the frames to disk first, just piping them in? Does concat support - in any way? Is there another way I can line up the frames with the audio? Do other video recording programs face similar issues?
Pipe
I don't have access to your image/video/data generator, so I can't tell what it is doing, but you can at least try a pipe:
your_data_process - | ffmpeg -f rawvideo -framerate 25 -pixel_format argb -video_size 640x480 -i - output.mp4
your_data_process in this example is just a placeholder for whatever is generating the video.
-f should be whatever format your_data_process is outputting. See ffmpeg -demuxers for a long list. This example assumes raw video.
A frame rate must be set, but that does not necessarily mean it will mess up your timing.
If your player doesn't like the output, add the -vf format=yuv420p output option.
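From a C# recorder this pipe can be driven straight through the ffmpeg process's stdin instead of going through a shell. A hedged sketch, where the 640x480 size, the 25 fps rate, and GetNextArgbFrame() are placeholders to be replaced by your own capture code:

using System.Diagnostics;

class RawVideoPipe
{
    // Placeholder for whatever produces a frame; returns null when capture stops.
    // Each frame must be exactly width * height * 4 bytes of ARGB data.
    static byte[] GetNextArgbFrame() => null;

    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-f rawvideo -framerate 25 -pixel_format argb " +
                        "-video_size 640x480 -i - -vf format=yuv420p output.mp4",
            UseShellExecute = false,
            RedirectStandardInput = true
        };

        using (var ffmpeg = Process.Start(psi))
        {
            using (var stdin = ffmpeg.StandardInput.BaseStream)
            {
                byte[] frame;
                while ((frame = GetNextArgbFrame()) != null)
                    stdin.Write(frame, 0, frame.Length);
            } // closing stdin signals end of input; ffmpeg then finalizes output.mp4

            ffmpeg.WaitForExit();
        }
    }
}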
Image
Another approach is to output to a single image file and atomically overwrite the file for each change. ffmpeg will then use the file as an input and will notice when it gets updated.
ffmpeg -f image2 -loop 1 -framerate 25 -i input.bmp output.mp4
Usually you don't have to set -f image2 manually for image inputs, but you do here, or ffmpeg won't notice the image updates; it would otherwise select a simpler demuxer for some image formats.
In this example ffmpeg will duplicate frames to fill 25 fps until the next image update so the timing should be preserved.
If your player doesn't like the output, add the -vf format=yuv420p output option.
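The one part that needs care on the program side is the atomic overwrite. A rough C# sketch of that step, assuming System.Drawing is available and input.bmp is the file ffmpeg is looping on: write to a temporary file first, then swap it into place so ffmpeg never reads a half-written image.

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

class FramePublisher
{
    // Writes the new frame next to the target, then swaps it in as one operation.
    static void PublishFrame(Bitmap frame, string target = "input.bmp")
    {
        string temp = target + ".tmp";
        frame.Save(temp, ImageFormat.Bmp);

        if (File.Exists(target))
            File.Replace(temp, target, null); // replace the old image in place
        else
            File.Move(temp, target);
    }
}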
I have tried something along the lines of
C:\ffmpeg\ffmpeg -i "Blip_Select2.wav" -c:a wav -sample_fmt u8 "Blip_Select2_8bit.wav"
but I cannot figure out how to get a 4-bit conversion.
I am using the audio in a C# project; 4 bits is sufficient, and I prefer WAV so I won't have to distribute a possibly usage-restricted decoder with my files.
Just a heads-up: -b 4 with SoX seems to use the MS ADPCM codec, which encodes the difference between samples using 4 bits. If you want to produce a similar result using ffmpeg you can use:
ffmpeg -i sound.wav -codec:a adpcm_ms -f wav sound_4bit.wav
OK, so I did manage to find a solution for this. It's a great command-line utility similar to ffmpeg, called SoX (Sound eXchange). It can be found here:
http://sox.sourceforge.net/
The command line that achieves converting to 4 bit wav is this:
sox "sound.wav" -b 4 "sound_4bit.wav"
It works perfectly and I did not notice any quality drop, as the sampling rate is still 44100 Hz, while the size drops to a quarter.
One important note: this works well only if your audio is clean and not recorded too loud, such as correctly recorded speech (which is what I am using it for), but it also works for music as long as it's not too loud.
Hi, I am using ffmpeg for Windows Phone, which was found here. With it I am trying to convert a .ts file to an .mp3 file, but the command I am using does not work for this type of conversion; I have also noted that it works for certain other conversions, such as TS to WMA, TS to OGG, etc. The commands I have tried are
-i sourcewithfullpath.ts destinationwithfullpath.mp3
-i sourcewithfullpath.ts
-f destinationwithfullpath.mp3
-i sourcewithfullpath.ts
-c:a libmp3lame destinationwithfullpath.mp3
-i sourcewithfullpath.ts
-acodec mp3 destinationwithfullpath.mp3
Most of these gave me an AccessViolationException while calling ffmpeg.Run().
Any help is appreciated.
I think Mulvya is right: MP3 encoding is not included in this FFmpeg build. But I figured out another way which, although it does not satisfy the exact need, is still a good option:
-i sourcewithfullpath.ts
-f destinationwithfullpath.mp2
FFmpeg does support the MP2 format; the file was converted to MP2 audio and renamed to .mp3, and it is playable on the Windows Phone. Please note that the compression ratio of MP2 is not as good as MP3's, which means the output file was much larger, almost double the size.
I am developing an application in which I get a series of images from an IP camera.
Now I want to make a video from those images. Can anyone help me create a video of any format from still images using C#?
You could just use ffmpeg behind the scenes to do so.
Use ffmpeg, http://ffmpeg.org/
FFmpeg is a complete, cross-platform solution to record, convert and stream audio and video.
Also...
FFmpeg is free software licensed under the LGPL or GPL depending on your choice of configuration options. If you use FFmpeg or its constituent libraries, you must adhere to the terms of the license in question. You can find basic compliance information and get licensing help on our license and legal considerations page.
From the documentation:
For creating a video from many images:
ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
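If the frames arriving from the camera are not already named that way, the only C#-side work is to save them with the sequential foo-%03d.jpeg names the command expects and then launch ffmpeg (for example with a process wrapper like the one sketched earlier). A rough sketch, where GetFramesFromCamera() stands in for your IP-camera code:

using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

class FrameDumper
{
    // Placeholder for however the application receives images from the IP camera.
    static IEnumerable<Image> GetFramesFromCamera() { yield break; }

    static void Main()
    {
        Directory.CreateDirectory("frames");
        int index = 1;
        foreach (Image frame in GetFramesFromCamera())
        {
            // %03d in the ffmpeg pattern means a zero-padded three-digit counter.
            string path = Path.Combine("frames", $"foo-{index:D3}.jpeg");
            frame.Save(path, ImageFormat.Jpeg);
            index++;
        }
        // Afterwards run the command above with "frames" as the working directory.
    }
}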
Possible Duplicate:
bitmaps to avi file c# .Net
I am a bit stuck on the idea of converting a sequence of image files into a single video file. I am using .NET as the platform. How should I proceed? I have no clear idea...
On top of that, I need to add audio (MP3) speech while the image sequence is displayed...
The general idea here is that you want to pass your raw images through an encoder and encode the file that way. The encoder will take care of generating all your keyframes and intermediate (P and B) frames, as well as generating any decoding metadata that needs to be stored. On top of that, running it through an encoding tool such as ffmpeg will also take care of saving the video in a known container format and properly structuring your video headers. All of this is complicated and tedious to do by hand, not to mention error-prone.
Whether you use ffmpeg or some other encoder is up to you. I suggest ffmpeg because it has the functionality you need. If you want to do this all in code, ffmpeg is open source and you can wrap the pieces you need in a .NET shell and call them that way. Keep ffmpeg's licenses in mind, though, if you are developing a distributable application.
This should get you started: Making movies from image files using ffmpeg/mencoder
To add audio check this: https://stackoverflow.com/questions/1329333/how-can-i-add-audio-mp3-to-a-flv-just-video-with-ffmpeg
Now, if you want to synchronize the audio and video (let's say the image sequence is people talking and the audio is their speech), you have a much more difficult problem on your hands. At that point you need to properly multiplex the audio and video frames based on their durations. FFmpeg probably won't do that well out of the box, since it will give every image in your sequence the same duration, which usually doesn't line up with the audio frames; see the sketch below for one way around that.
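One workaround, sketched below on the assumption that you know how long each image should stay on screen (for example from capture timestamps), is to feed ffmpeg a concat list with an explicit duration per image and mux the speech track in the same call; frames.txt, speech.mp3 and out.mp4 are placeholder names.

using System.Collections.Generic;
using System.Diagnostics;
using System.Globalization;
using System.IO;

class SyncedVideoBuilder
{
    // frames holds (fileName, seconds) pairs derived from your own timing information.
    static void BuildVideo(IEnumerable<(string File, double Seconds)> frames)
    {
        using (var list = new StreamWriter("frames.txt"))
        {
            list.WriteLine("ffconcat version 1.0");
            foreach (var (file, seconds) in frames)
            {
                list.WriteLine($"file '{file}'");
                list.WriteLine("duration " + seconds.ToString(CultureInfo.InvariantCulture));
            }
        }

        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-f concat -i frames.txt -i speech.mp3 -pix_fmt yuv420p -shortest out.mp4",
            UseShellExecute = false
        };
        using (var ffmpeg = Process.Start(psi))
            ffmpeg.WaitForExit();
    }
}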