Issues mapping multiple Axis RTSP streams using FFMPEG - c#

I have an application built in C# that leverages FFmpeg to map three H.264 RTSP streams from Axis cameras along with a gdigrab screen recording and saves everything to a file using this command:
-rtsp_transport tcp -i rtsp://192.168.1.200/axis-media/media.amp -rtsp_transport tcp -i rtsp://192.168.2.200/axis-media/media.amp -rtsp_transport tcp -i rtsp://192.168.3.200/axis-media/media.amp -r 30 -f gdigrab -framerate 1 -i title="MainWindow" -c copy -map 0 -vcodec copy -map 1 -metadata title="6-26-2017-4-22-PM- TEST VIDEO" -vcodec copy -map 2 -vcodec copy -map 3 -vcodec h264 -preset ultrafast C:\Users\*USERNAME*\6-26-2017-4-22-PM-cam1comb.mkv
The issue I'm having is that the Axis camera streams are out of sync with each other, with approximately a 3-second delay between the three streams. However, when I test the same FFmpeg command with all three camera inputs replaced by this stream rtsp://mpv.cdn3.bigCDN.com:554/bigCDN/_definst_/mp4:bigbuckbunnyiphone_400.mp4 (keeping the screen grab), everything works perfectly.
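Not from the original thread, but a sketch of one approach sometimes tried when live RTSP inputs drift apart: stamp each input with the wall clock so the muxer has comparable timestamps to align. -use_wallclock_as_timestamps is a standard libavformat input option; everything else below is the command from the question, abbreviated:

-rtsp_transport tcp -use_wallclock_as_timestamps 1 -i rtsp://192.168.1.200/axis-media/media.amp -rtsp_transport tcp -use_wallclock_as_timestamps 1 -i rtsp://192.168.2.200/axis-media/media.amp -rtsp_transport tcp -use_wallclock_as_timestamps 1 -i rtsp://192.168.3.200/axis-media/media.amp ... (rest of the command unchanged)

Whether this removes the 3-second skew depends on the cameras' clocks and network buffering, so treat it as an experiment rather than a guaranteed fix.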

Related

Save incoming frames from camera to video using ffmpeg

I'm trying to record frames that I read from an industrial camera (such as https://en.ids-imaging.com/store/u3-3680xle.html) in my C# code as bitmaps.
I want to convert those bitmaps to video on the fly.
My solution so far has been to send those bitmaps to a virtual camera (e2eSoft VCam) and then record that camera with ffmpeg, using this command:
-f dshow -i video=VCam -r 18 -vcodec libx264 Video.mp4
This is not working well, because frames are dropped and the video is not smooth.
Is there another way to use ffmpeg to convert those images to video on the fly?
Thank you!
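One common alternative is to skip the virtual camera entirely and pipe the bitmaps straight into ffmpeg's stdin via the image2pipe demuxer. A minimal sketch under assumptions: GrabFrameFromCamera is a hypothetical stand-in for the IDS SDK read, and the frame rate, duration, and output path are illustrative.

using System.Diagnostics;
using System.Drawing;
using System.Drawing.Imaging;

class FrameRecorder
{
    static void Main()
    {
        // Launch ffmpeg reading BMP frames from stdin at 18 fps.
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg.exe",
            Arguments = "-f image2pipe -vcodec bmp -framerate 18 -i - " +
                        "-vcodec libx264 -pix_fmt yuv420p -y Video.mp4",
            UseShellExecute = false,
            RedirectStandardInput = true
        };
        using (var ffmpeg = Process.Start(psi))
        {
            var stdin = ffmpeg.StandardInput.BaseStream;
            for (int i = 0; i < 18 * 10; i++)                 // e.g. 10 seconds of frames
            {
                using (Bitmap frame = GrabFrameFromCamera())  // hypothetical camera read
                {
                    frame.Save(stdin, ImageFormat.Bmp);       // one BMP per frame down the pipe
                }
            }
            stdin.Close();                                    // EOF lets ffmpeg finalize the file
            ffmpeg.WaitForExit();
        }
    }

    static Bitmap GrabFrameFromCamera()
    {
        // Placeholder: replace with the actual frame grab from the camera SDK.
        return new Bitmap(640, 480);
    }
}

Because ffmpeg timestamps the frames itself at the declared -framerate, smoothness no longer depends on a virtual-camera driver keeping up.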

Capture desktop/screen and merge sound from laptop mic

I am trying to create a desktop application to create a video by:
1. capturing the screen,
2. recording sound from the mic,
3. merging 1 with 2 to create the video.
Or is there any easy way to do this in C# or VB.NET?
I used AForge.NET, which supports only video.
I want to avoid third-party tools, especially expensive ones.
I use ffmpeg for this: video via screen-capture-recorder (https://trac.ffmpeg.org/wiki/Capture/Desktop). I control the region that screen-capture-recorder records by writing to the registry before I start ffmpeg. I get audio from Virtual Audio Cable (https://vac.muzychenko.net/en/), but that's because I record a machine that has no sound card; you should be able to do it with whatever your mic device is called. You could use something like NAudio to enumerate the devices, or get ffmpeg to enumerate them and parse its output: https://trac.ffmpeg.org/wiki/DirectShow
I capture two audio streams, using the following ffmpeg args
-f dshow -i video="screen-capture-recorder" -thread_queue_size 512 -f dshow -i audio="Line 2 (Virtual Audio Cable)" -f dshow -i audio="Line 3 (Virtual Audio Cable)" -map 0:v -map 1:a -map 2:a -pix_fmt yuv420p -y "{0}"
The C# app is responsible for a lot of things, such as taking a screenshot, looking for the thing I want to record, positioning the region, starting ffmpeg, etc., but ffmpeg does the heavy lifting. You don't even need to write any C# for starters: just get FFmpeg working from the command line and recording nicely with various buffer settings, then put it into a C# program with Process.Start(command, arguments).
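As a minimal sketch of that last step (the arguments are the ones quoted above, with an illustrative output path substituted for the "{0}" placeholder):

using System.Diagnostics;

class Recorder
{
    static void Main()
    {
        // Same dshow capture arguments as above, output path filled in for illustration.
        string args = "-f dshow -i video=\"screen-capture-recorder\" "
                    + "-thread_queue_size 512 -f dshow -i audio=\"Line 2 (Virtual Audio Cable)\" "
                    + "-f dshow -i audio=\"Line 3 (Virtual Audio Cable)\" "
                    + "-map 0:v -map 1:a -map 2:a -pix_fmt yuv420p -y capture.mp4";
        using (var ffmpeg = Process.Start("ffmpeg.exe", args))
        {
            ffmpeg.WaitForExit();   // blocks until ffmpeg exits (e.g. you press q in its console)
        }
    }
}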

How to handle differing .mp4 file types from different sources?

If I take an .mp4 recorded on my mobile (Samsung S5) and pass it through FFmpeg with the below command, the output file (fileX.avi) is an uncompressed greyscale bitmap video file.
The offset values in fileX.avi (output from FFmpeg) that allow me to locate the video frame data are always 5680 bytes for the file header and 62 bytes for the inter-frame header.
The data is uncompressed RGB24, so I can easily calculate the size of a video frame from height x width x 3.
So my C# application can always access the video frames in fileX.avi at these offsets.
(This works great.)
My FFmpeg Command is:
ffmpeg.exe -i source.mp4 -b 1150 -r 20.97 -g 120 -an -vf format=gray -f rawvideo -pix_fmt gray -s 384x216 -vcodec rawvideo -y fileX.avi
However... I recently took an .mp4 file from a different source (produced by PowerDirector 14 rather than coming directly from my mobile phone) and used it as the input source.mp4. But now the structure of fileX.avi differs: the offsets of 5680 + 62 bytes from the start of fileX.avi no longer land me at the start of the video frame data.
There seem to be different file formats for .mp4, and obviously if so, my crude offset approach will not work for them all. I suspected at the time I wrote the code that my method was all too easy a solution!
So can anyone advise on the approach I should take now? Should I inspect the original .mp4 or the output file (fileX.avi) to determine a "file type" from which I can derive the different offsets?
At the very least I need to be able to identify the "type" of .mp4 file that works so I can declare the type that will work with my software.
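Not from the thread, but a more robust route than fixed offsets is to parse the AVI's RIFF structure itself: frame chunks live inside the LIST 'movi' section with FourCC ids like '00db'/'00dc' (stream 0, uncompressed/compressed video). A minimal sketch, assuming a single video stream:

using System;
using System.IO;
using System.Text;

class AviScanner
{
    // Walk the RIFF chunk tree and report where each video frame's data starts.
    static void Main(string[] args)
    {
        using (var br = new BinaryReader(File.OpenRead(args[0])))
        {
            br.BaseStream.Seek(12, SeekOrigin.Begin);            // skip "RIFF" + size + "AVI "
            while (br.BaseStream.Position <= br.BaseStream.Length - 8)
            {
                string id = Encoding.ASCII.GetString(br.ReadBytes(4));
                uint size = br.ReadUInt32();                     // little-endian chunk size
                long dataStart = br.BaseStream.Position;

                if (id == "LIST")
                {
                    string listType = Encoding.ASCII.GetString(br.ReadBytes(4));
                    if (listType == "movi")
                        continue;                                // descend: frame chunks are inside
                    br.BaseStream.Seek(dataStart + size + (size & 1), SeekOrigin.Begin);
                }
                else
                {
                    if (id == "00db" || id == "00dc")            // stream-0 video frame chunk
                        Console.WriteLine($"frame data at offset {dataStart}, {size} bytes");
                    br.BaseStream.Seek(dataStart + size + (size & 1), SeekOrigin.Begin);
                }
            }
        }
    }
}

This way the 5680/62-byte assumptions disappear: whatever headers the source .mp4 causes ffmpeg to write, the scanner finds the frame data by walking the chunk ids.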

Make a lower-bitrate (lower-resolution) video at concat time using ffmpeg

I have concatenated multiple videos using ffmpeg, but I need a lower-resolution video (to compress its size) because sometimes the videos are too big and take too much time.
I have used this command :
"ffmpeg -f concat -i input.txt -c copy out.mp4"
Please help me compress the video.
Thanks in advance.
Instead of using '-c copy' (which just remuxes the video from the source without re-compression), you can choose an appropriate video encoder. As your target is mp4, you probably want libx264, e.g.:
ffmpeg -f concat -i input.txt -vcodec libx264 -preset fast -acodec copy -b:v 3808k out.mp4
This would create a roughly 4 Mbit/s video (depending on your source audio). You can experiment with the -b:v parameter to suit your needs, and you may find you don't need to alter the resolution at all.
To alter the resolution, a scaling video filter would do the job, e.g.:
ffmpeg -f concat -i input.txt -vf scale=-1:720 -vcodec libx264 -preset fast -acodec copy -b:v 3808k out.mp4
This will produce a video with a vertical resolution of 720 pixels, scaling the horizontal resolution to match the aspect ratio of the input.
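For reference, the concat demuxer reads input.txt as a plain list of files, which should all share the same stream layout (the paths here are illustrative):

file 'part1.mp4'
file 'part2.mp4'
file 'part3.mp4'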

Optimum encoding standard for flowplayer to play mp4

I'm using Flowplayer 3.1.1 for streaming videos to my browser. The videos are uploaded by the users, and they may upload different formats. What would be a solution to stream the videos as mp4, whatever format they upload? I'm currently using ffmpeg commands.
ffmpeg -i "InputFile.mp4" -sameq -vcodec libx264 -r 35 -acodec libfaac -y "OutputFile.mp4"
But larger video files (say 100 MB) take a minute or more for loading into Flowplayer and buffering. I think the problem is with my encoding.
Your valuable suggestions are welcome!
The problem comes from the metadata (the moov atom): ffmpeg puts this data at the end of the file, and for progressive download you must move it to the beginning. You can use MP4Box or qt-faststart after the ffmpeg process:
MP4Box -inter 1000 file.mp4
or
qt-faststart in.mp4 out.mp4
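Note: depending on your ffmpeg build, the mp4 muxer can also do this in the same pass via -movflags +faststart, skipping the extra tool, e.g.:

ffmpeg -i "InputFile.mp4" -vcodec libx264 -acodec libfaac -movflags +faststart -y "OutputFile.mp4"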
