I am trying to make a real-time video recorder with ffmpeg (not a screen recorder; a Unity player recorder of specific dimensions).
I am able to get the ARGB data, and so far I have been writing it out as a bunch of BMPs and then running ffmpeg's concat command like
ffmpeg -i files.txt -i pictures/pic%06d.bmp output.mp4
with different codecs etc., and my files.txt is essentially (pseudo):
ffconcat version 1.0
file pic000000.bmp
duration 0.016
# etc.; the durations were generated from timestamps
Anyway, that all works, believe it or not, but writing the files to disk as BMP (or even encoding them in a compressed format and then writing that to disk) takes a lot of extra time, and I would prefer to pipe the data directly to ffmpeg.
I know that in some cases you can specify - as the input file and then, from whatever programming language the process was started in, pass the byte data through stdin. I am pretty sure of that, but here is the problem:
I have only been able to find out how to do this with a fixed frame rate, not with concat, and I think I need concat here because it's very important that the images have exact timestamps so they line up with the audio. There will be a slight delay when capturing the frames, so I have been calculating each frame's duration from the timestamps (the last frame has no duration) in order to line them up perfectly with the audio. But as far as I can find, the concat feature seems to require the files to already be written to disk and then listed in a text file.
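For reference, my current duration-generation step looks roughly like this (a C# sketch; frameTimes is a hypothetical list of capture timestamps in seconds):

using System.Collections.Generic;
using System.Globalization;
using System.IO;

static void WriteConcatFile(List<double> frameTimes, string path)
{
    using var w = new StreamWriter(path);
    w.WriteLine("ffconcat version 1.0");
    for (int i = 0; i < frameTimes.Count; i++)
    {
        w.WriteLine($"file pic{i:D6}.bmp");
        // Duration is the gap to the next frame; the last frame gets none.
        if (i + 1 < frameTimes.Count)
        {
            double d = frameTimes[i + 1] - frameTimes[i];
            w.WriteLine("duration " + d.ToString("F6", CultureInfo.InvariantCulture));
        }
    }
}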
So is there any way to get a custom frame rate for each frame without writing the frames to disk first, just piping them in? Does concat support - in any way? Is there another way I can line up the frames with the audio? Does other video-recording software face similar issues?
Pipe
I don't have access to your image/video/data generator, so I can't tell what it is doing, but you can at least try a pipe:
your_data_process - | ffmpeg -f rawvideo -framerate 25 -pixel_format argb -video_size 640x480 -i - output.mp4
your_data_process in this example is just a placeholder for whatever is generating the video.
-f should be whatever format your_data_process is outputting. See ffmpeg -demuxers for a long list. This example assumes raw video.
A frame rate must be set, but that does not necessarily mean it will mess up your timing.
If your player does not like the output then add the -vf format=yuv420p output option.
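If your generator is a C# process, the pipe can be fed from code along these lines (a minimal sketch, assuming 640x480 ARGB frames as in the command above, with the optional yuv420p filter included; GetNextFrame() is a placeholder for your capture code):

using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = "-f rawvideo -framerate 25 -pixel_format argb " +
                "-video_size 640x480 -i - -vf format=yuv420p output.mp4",
    RedirectStandardInput = true,
    UseShellExecute = false
};
using var ffmpeg = Process.Start(psi);
var stdin = ffmpeg.StandardInput.BaseStream;

byte[] frame;
while ((frame = GetNextFrame()) != null)
    stdin.Write(frame, 0, frame.Length);

stdin.Close();        // closing stdin signals EOF so ffmpeg finalizes the file
ffmpeg.WaitForExit();

// Stub: replace with your capture code. It should return
// width * height * 4 bytes of ARGB data, or null when done.
static byte[] GetNextFrame() => null;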
Image
Another approach is to output to a single image file and atomically overwrite the file for each change. ffmpeg will then use the file as an input and will notice when it gets updated.
ffmpeg -f image2 -loop 1 -framerate 25 -i input.bmp output.mp4
Usually you don't have to manually set -f image2 for image inputs, but you do in this case or ffmpeg won't notice the image updates; otherwise it selects a simpler demuxer for some image formats.
In this example ffmpeg will duplicate frames to fill 25 fps until the next image update so the timing should be preserved.
If your player does not like the output then add the -vf format=yuv420p output option.
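The overwrite must be atomic so ffmpeg never reads a half-written file; in C# that is typically done by writing to a temporary name and renaming over the target (a sketch; the File.Move overload with overwrite needs .NET Core 3.0+, and File.Replace is an alternative on older runtimes):

using System.IO;

static void PublishFrame(byte[] bmpBytes, string target)
{
    string tmp = target + ".tmp";
    File.WriteAllBytes(tmp, bmpBytes);          // write the new frame fully
    File.Move(tmp, target, overwrite: true);    // then swap it in with one rename
}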
Related
I have tried something along the lines of
C:\ffmpeg\ffmpeg -i "Blip_Select2.wav" -c:a wav -sample_fmt u8 "Blip_Select2_8bit.wav"
but I cannot figure out how to get a 4-bit conversion.
I am using the audio for a C# project; 4-bit is sufficient, and I prefer WAV so I won't have to distribute a possibly usage-restricted decoder with my files.
Just a heads up that -b 4 with sox seems to use the MS ADPCM codec, which encodes the difference between samples using 4 bits. If you want to produce a similar result using ffmpeg you can use:
ffmpeg -i sound.wav -codec:a adpcm_ms -f wav sound_4bit.wav
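To verify the result, ffprobe (bundled with ffmpeg) will report the codec it actually wrote:
ffprobe -hide_banner sound_4bit.wav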
Ok, so I did manage to find a solution for this: a great command-line utility similar to ffmpeg, called SoX (Sound eXchange). It can be found here:
http://sox.sourceforge.net/
The command line that achieves converting to 4 bit wav is this:
sox "sound.wav" -b 4 "sound_4bit.wav"
It works perfectly, and I did not notice any quality drop, as the sampling rate is still 44100 Hz, while the file size drops to about a quarter.
An important note: this works well only if your audio is clean and not recorded too loud, such as correctly recorded voice speech (which is what I am using it for), but it also works for music as long as it's not too loud.
If I take a .mp4 recorded on my mobile (Samsung S5) and pass it through FFmpeg with the command below, the output file (fileX.avi) is an uncompressed greyscale bitmap video file.
The offsets in fileX.avi (the FFmpeg output) that let me locate the video frame data are always 5680 bytes for the file header and 62 bytes for each inter-frame header.
The data is uncompressed RGB24, so I can easily calculate the size of a video frame as height x width x 3.
So my C# application can always access the video frames in fileX.avi at the offsets above.
(This works great).
My FFmpeg Command is:
ffmpeg.exe -i source.mp4 -b 1150 -r 20.97 -g 120 -an -vf format=gray -f rawvideo -pix_fmt gray -s 384x216 -vcodec rawvideo -y fileX.avi
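To make the arithmetic concrete, reading frame n comes down to something like this (a C# sketch using the numbers above, which, as described next, do not hold for every input):

using System.IO;

const long FileHeader  = 5680;            // bytes before the first frame header
const long FrameHeader = 62;              // bytes preceding each frame's pixels
const long FrameSize   = 384 * 216 * 3;   // width x height x 3 bytes per pixel

byte[] ReadFrame(string path, long n)
{
    long offset = FileHeader + (n + 1) * FrameHeader + n * FrameSize;
    using var fs = File.OpenRead(path);
    fs.Seek(offset, SeekOrigin.Begin);
    var frame = new byte[FrameSize];
    fs.Read(frame, 0, frame.Length);      // one full frame of pixel data
    return frame;
}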
However... I recently took an .mp4 file from a different source (produced by Power Director 14 instead of coming directly from my mobile phone) and used it as the input source.mp4. Now the structure of fileX.avi differs: the offsets of 5680 + 62 bytes from the start no longer land me at the start of the video frame data.
There seem to be different file formats for .mp4, and obviously, if there are, my crude offset approach will not work for them all. I suspected at the time I wrote the code that my method was too easy a solution!
So can anyone advise on the approach I should take now? Should I check the original .mp4 or the output file (fileX.avi) to determine a "file type" from which I can derive the different offsets?
At the very least I need to be able to identify the "type" of .mp4 file that works, so I can declare which type will work with my software.
I have a URL (<ip>/ipcam/mpeg4.cgi) that points to my IP camera, which is connected via Ethernet.
Accessing the URL results in an infinite stream of video data (possibly with audio).
I would like to store this data into a video file and play it later with a video player (HTML5's video tag is preferred as the player).
However, the straightforward approach of simply saving the stream data into an .mp4 file didn't work.
I looked into the file, and it turned out there are some HTTP-style headers, which I manually removed using a binary editing tool, and yet no player could play the rest of the file.
The headers are:
--myboundary
Content-Type: image/mpeg4
Content-Length: 76241
X-Status: 0
X-Tag: 1693923
X-Flags: 0
X-Alarm: 0
X-Frametype: I
X-Framerate: 30
X-Resolution: 1920*1080
X-Audio: 1
X-Time: 2000-02-03 02:46:31
alarm: 0000
My question is pretty clear now, and I would appreciate any help or suggestions. I suspect I have to manually create some MP4 headers myself based on the values above; however, I fail to understand format descriptions such as these.
For reference, the video stream settings on my IP camera match the headers above (1920*1080 at 30 fps).
I could also use the ffmpeg tool, but no matter how I mix the arguments to the program, it keeps failing with an error.
It looks like your server is sending H.264 encoded 'rawvideo' in Annex B byte stream format.
It might be reformatted to .mp4 with something like the command line below:
ffmpeg -f h264 -i {input file} -c:v copy out.mp4
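Before that command can work, the multipart headers shown in the question have to be stripped so only the raw payload remains. A rough C# sketch (the header names are taken from the question's dump; this ignores the audio parts that X-Audio: 1 suggests may be interleaved):

using System;
using System.IO;
using System.Text;

static void StripHeaders(Stream cam, Stream rawOut)
{
    while (true)
    {
        // Skip blank padding, then scan the header block up to the blank
        // line that separates it from the payload.
        string line;
        while ((line = ReadLine(cam)) != null && line.Length == 0) { }
        int len = -1;
        for (; line != null && line.Length > 0; line = ReadLine(cam))
            if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
                len = int.Parse(line.Substring("Content-Length:".Length).Trim());
        if (len < 0) return;                       // end of stream
        var payload = new byte[len];
        int off = 0;
        while (off < len)                          // network reads may be short
        {
            int n = cam.Read(payload, off, len - off);
            if (n == 0) break;
            off += n;
        }
        rawOut.Write(payload, 0, off);             // payload only, headers dropped
    }
}

static string ReadLine(Stream s)                   // minimal CR/LF line reader
{
    var sb = new StringBuilder();
    int b;
    while ((b = s.ReadByte()) != -1 && b != '\n')
        if (b != '\r') sb.Append((char)b);
    return (b == -1 && sb.Length == 0) ? null : sb.ToString();
}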
Saving an audio/video stream to a file is not an easy job. If it's video only, using the MPEG-TS format is the easiest way to go.
For .mp4 streaming, consider -movflags faststart; see: Recommendation on the best quality/performance H264 encoder for video encoding?
** Update: the h264_mp4toannexb bitstream filter is not needed here; it converts MP4-style (AVCC) H.264 to Annex B, and your stream is already Annex B.
Well, this is not as straightforward as it seems:
1) The HTML5 <video> tag has some requirements for the MP4 stream: it must be fragmented (meaning the internal atoms that describe length and other data sit at the beginning of the stream). Most MP4 video files do not have this layout, so your option is to reformat them with FFmpeg or other tools (see this, and the example after this list); then you can actually serve the file as is.
2) Nginx has a plugin that allows streaming MP4 files. I haven't used it, but it could be useful to you, since I guess it takes care of the internal stuff.
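For point 1, the reformatting itself is a one-liner; -movflags +faststart moves the index metadata to the front of the file and -c copy avoids re-encoding:
ffmpeg -i input.mp4 -movflags +faststart -c copy output.mp4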
I have successfully converted AVI files to MPEG using the NReco converter (http://www.nrecosite.com/video_converter_net.aspx),
but the length (duration) of the converted video is never greater than 2 mins 35 secs.
I tried the ffmpeg command-line utility (https://www.ffmpeg.org/download.html or http://ffmpeg.zeranoe.com/builds/, ffmpeg 64-bit static for Windows), but the length was always less than or equal to 2 mins 35 secs.
How do I increase the duration of the ffmpeg-converted video?
I tried the -t option but couldn't increase the length (duration) of the converted video. The original video is a 14 min 5 sec AVI file.
ffmpeg -i inputAVIfilename outputMPEGfilename
ffmpeg -i inputAVIfilename -t 90000 outputMPEGfilename
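(For reference: -t is an output duration limit, given in seconds or as hh:mm:ss, so it can only cap the length, never extend it; -t 90000 is 25 hours, so it cannot be what cuts the output to 2 mins 35 secs. Keeping the full source length would look like:
ffmpeg -i inputAVIfilename -t 00:14:05 outputMPEGfilename
though the -t here is redundant, since a conversion keeps the full duration by default.)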
The video file has only bitmap images. No sound tracks are required.
Please note that my DLL would be used with both Windows and web applications.
Converting videos from one format to another can be done using either software or hardware. In your case you are using ffmpeg, which is a software-based solution. Generally speaking, software-based solutions are slower, less efficient, and subject to operating-system constraints, and I suspect you have reached such a limit.
I suggest you use a cloud-based solution such as Azure Media Services or Elemental.
First:
Well, I'm tired of asking the same question; I do know that I can ask about coding techniques, not just specific problematic points.
Background:
My project is to make a desktop recorder. I tried WM Encoder; it works, but double-click doesn't work in most cases (e.g. I can't open My Computer and have to press Enter instead). I searched around, and it turned out that WM Encoder is the problem, and it's a matter of waiting for a new version to solve this double-click issue.
Now:
These are my previous questions related to my problem:
Re size and compress jpeg
sort files from folder in String[] Array
create a video from a list of JPEG files
Combine images into a movie
So:
How do I save a list of images into one video using ffmpeg, step by step?
I found ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi, but I have no idea how to use it in my project or where I am supposed to put this code.
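One way to use it from C# is simply to launch ffmpeg as an external process (a minimal sketch; it assumes ffmpeg.exe is on the PATH, the JPEGs are named foo-001.jpeg, foo-002.jpeg, ... in the working directory, and 640x480 stands in for the WxH placeholder):

using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = "-f image2 -i foo-%03d.jpeg -r 12 -s 640x480 foo.avi",
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var p = Process.Start(psi))
{
    p.WaitForExit();   // block until foo.avi has been written
}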