I am trying to create a desktop application that creates a video by:

1. capturing the screen,
2. recording sound from the mic,
3. merging 1 and 2 into a video,

or any other easy way to do it in C# or VB.NET.

I used AForge.NET, which supports only video. I want to avoid third-party tools, especially expensive ones.
I use ffmpeg for this: video via screen-capture-recorder (https://trac.ffmpeg.org/wiki/Capture/Desktop). I control the region that screen-capture-recorder records by writing to the registry before I start ffmpeg. I get audio from Virtual Audio Cable (https://vac.muzychenko.net/en/), but that is only because I record a machine that has no sound card; you should be able to do the same with whatever your mic device is called. You could use something like NAudio to enumerate the devices, or get ffmpeg to enumerate them and parse its output: https://trac.ffmpeg.org/wiki/DirectShow
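For example, a minimal NAudio sketch to list capture devices (this assumes the NAudio NuGet package; note that the underlying waveIn API truncates device names to 31 characters, so very long names may not match the dshow name exactly):

using NAudio.Wave;

class ListMics
{
    static void Main()
    {
        // Each ProductName is a candidate for ffmpeg's -f dshow -i audio="<name>"
        for (int i = 0; i < WaveIn.DeviceCount; i++)
        {
            System.Console.WriteLine(WaveIn.GetCapabilities(i).ProductName);
        }
    }
}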
I capture two audio streams, using the following ffmpeg arguments:
-f dshow -i video="screen-capture-recorder" -thread_queue_size 512 -f dshow -i audio="Line 2 (Virtual Audio Cable)" -f dshow -i audio="Line 3 (Virtual Audio Cable)" -map 0:v -map 1:a -map 2:a -pix_fmt yuv420p -y "{0}"
The C# app is responsible for a lot of things, such as taking a screenshot, finding the thing I want to record, positioning the region, starting ffmpeg, etc., but ffmpeg does the heavy lifting. You don't even need to write any C# for starters: just get FFmpeg working from the command line and recording nicely with various buffer settings, then move it into a C# program with Process.Start(command, arguments).
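To illustrate that flow, here is a minimal sketch. The registry value names (start_x, start_y, capture_width, capture_height under HKCU\Software\screen-capture-recorder) are the ones documented by screen-capture-recorder; verify them against your installed version, and "Microphone (Your Device)" is a placeholder for your actual mic name:

using System.Diagnostics;
using Microsoft.Win32;

class ScreenRecorder
{
    static void Main()
    {
        // Set the capture region before launching ffmpeg
        // (value names assumed from the screen-capture-recorder docs).
        using (var key = Registry.CurrentUser.CreateSubKey(@"Software\screen-capture-recorder"))
        {
            key.SetValue("start_x", 0, RegistryValueKind.DWord);
            key.SetValue("start_y", 0, RegistryValueKind.DWord);
            key.SetValue("capture_width", 1280, RegistryValueKind.DWord);
            key.SetValue("capture_height", 720, RegistryValueKind.DWord);
        }

        // ffmpeg does the heavy lifting; the arguments mirror the ones above.
        var args = "-f dshow -i video=\"screen-capture-recorder\" " +
                   "-f dshow -i audio=\"Microphone (Your Device)\" " +
                   "-pix_fmt yuv420p -y out.mp4";
        using (var ffmpeg = Process.Start("ffmpeg", args))
        {
            ffmpeg.WaitForExit();
        }
    }
}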
I have a camera sending raw frames to my application and I need to generate an h264 stream from those frames and make it playable in a browser with low latency. My idea is to use a webRTC stream in order to keep latency to a minimum.
Until now my approach has been the following:
1. Use FFmpeg to generate an h264/RTSP stream by means of the command:

ffmpeg -fflags nobuffer -re -i "frames%05d.bmp" -pix_fmt yuv420p -c:v libx264 -crf 23 -f rtsp rtsp://localhost:8554/mystream

2. Use RTSP simple server to publish the RTSP stream.

3. Use RTSPtoWeb to generate a webRTC stream playable by browsers.
Please note input frames are 728x544 bitmaps.
So far I have had no luck: the RTSP stream produced at step (2) is playable by means of VLC, but it has problems when played by means of webRTC, e.g. continuous freezes. Please note I can play the h264/RTSP streams produced by AXIS IP cameras by means of RTSPtoWeb with no problem.
Furthermore, I'd need the frames to be passed to FFmpeg by a C# application instead of being read from disk.
Of course, if you know of a way to directly generate an h264/webRTC stream from an image sequence, that would be wonderful.
Has anyone ever tried something like this?
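For the C#-to-FFmpeg part, one possibility is to pipe raw frames into ffmpeg's stdin instead of reading BMPs from disk. A minimal sketch, assuming the 728x544 frames from the question, a bgr24 pixel format, 25 fps, and a hypothetical TryGetNextFrame callback standing in for the camera API:

using System.Diagnostics;

class FramePiper
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-f rawvideo -pixel_format bgr24 -video_size 728x544 -framerate 25 -i - " +
                        "-pix_fmt yuv420p -c:v libx264 -tune zerolatency -f rtsp rtsp://localhost:8554/mystream",
            RedirectStandardInput = true,
            UseShellExecute = false
        };
        using (var ffmpeg = Process.Start(psi))
        {
            var stdin = ffmpeg.StandardInput.BaseStream;
            while (TryGetNextFrame(out byte[] frame))   // placeholder for the camera callback
            {
                stdin.Write(frame, 0, frame.Length);    // one raw bgr24 frame per write
            }
            stdin.Close();
            ffmpeg.WaitForExit();
        }
    }

    // Hypothetical stand-in for however the camera delivers frames.
    static bool TryGetNextFrame(out byte[] frame) { frame = null; return false; }
}

-tune zerolatency disables lookahead and B-frames in libx264, which helps keep end-to-end latency down.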
I'm trying to record frames that I read from an industrial camera (such as - https://en.ids-imaging.com/store/u3-3680xle.html) in my C# code as bitmaps.
I want to convert those bitmaps to video on the fly.
My solution until now has been to send those bitmaps to a virtual camera (e2eSoft VCam) and then record that camera with ffmpeg, using this command:
-f dshow -i video=VCam -r 18 -vcodec libx264 Video.mp4
This is not working so well: there are dropped frames and the video is not smooth.
Is there another way to use ffmpeg to convert those images to video on the fly?
Thank you!
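One way to cut out the virtual camera entirely, as a sketch: encode each bitmap as BMP straight into ffmpeg's stdin and let the image2pipe demuxer decode it. GetFramesFromCamera is a hypothetical stand-in for the IDS camera API:

using System.Diagnostics;
using System.Drawing;
using System.Drawing.Imaging;

class BitmapPiper
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            // image2pipe reads each BMP's own header, so no fixed frame size is needed.
            Arguments = "-f image2pipe -c:v bmp -framerate 18 -i - -c:v libx264 -pix_fmt yuv420p Video.mp4",
            RedirectStandardInput = true,
            UseShellExecute = false
        };
        using (var ffmpeg = Process.Start(psi))
        {
            foreach (Bitmap frame in GetFramesFromCamera())   // placeholder for your camera loop
            {
                frame.Save(ffmpeg.StandardInput.BaseStream, ImageFormat.Bmp);
            }
            ffmpeg.StandardInput.Close();
            ffmpeg.WaitForExit();
        }
    }

    // Hypothetical stand-in for the camera API.
    static System.Collections.Generic.IEnumerable<Bitmap> GetFramesFromCamera() { yield break; }
}

This removes one copy and one extra process (the virtual camera) from the path, which may be where the dropped frames are coming from.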
I am trying to make a real-time video recorder with ffmpeg (not a screen recorder; a recorder for a Unity player of specific dimensions).
I am able to get the ARGB data, and so far I have been writing it to a bunch of BMPs and then running ffmpeg's concat command like
ffmpeg -i files.txt -i pictures/pic%06d.bmp output.mp4
With different codecs etc., and my files.txt is essentially (pseudo):

ffconcat version 1.0
file pic000000.bmp
duration 0.016
# etc., basically the durations were generated from time stamps
Anyways, that all works, believe it or not, but writing the files to disk as BMPs (or even encoding them as a compressed format and then writing that to disk) takes up a lot of extra time, and I would prefer to pipe the data directly to ffmpeg.
I know that in some cases you can input a file by using the - operator and then, from whatever programming language the process was started in, pass the byte data through stdin. The problem:
I have only been able to find out how to do this with a set framerate, but not with concat, and I (think?) I need concat here because it's very important that the images have exact timestamps to line up with the audio. There will be a slight delay when capturing the frames, so I have been calculating each frame's duration from the timestamps (the last one has no duration) in order to line them up perfectly with the audio. But as far as I can find, the concat feature seems to require the files to already be written to disk and then specified in a text file.
So is there any way to get a custom frame rate for each frame without writing the frames to disk first, just piping them in? Does concat support - in any way? Is there another way I can line up the frames with the audio? Does other video recording software face similar issues?
Pipe
I don't have access to your image/video/data generator, so I can't tell what it is doing, but you can at least try a pipe:
your_data_process - | ffmpeg -f rawvideo -framerate 25 -pixel_format argb -video_size 640x480 -i - output.mp4
your_data_process in this example is just a placeholder for whatever is generating the video.
-f should be whatever format your_data_process is outputting. See ffmpeg -demuxers for a long list. This example assumes raw video.
A frame rate must be set but it does not necessarily mean it will mess up your timing.
If your player doesn't like the output then add the -vf format=yuv420p output option.
Image
Another approach is to output to a single image file and atomically overwrite the file for each change. ffmpeg will then use the file as an input and will notice when it gets updated.
ffmpeg -f image2 -loop 1 -framerate 25 -i input.bmp output.mp4
Usually you don't have to manually set -f image2 for most image inputs. You have to in this case or it won't notice the image update. This is because it will select a simpler demuxer for some image formats.
In this example ffmpeg will duplicate frames to fill 25 fps until the next image update so the timing should be preserved.
If your player doesn't like the output then add the -vf format=yuv420p output option.
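On the producing side, the atomic overwrite can be done in C# by writing to a temporary file and swapping it in, so ffmpeg never sees a half-written BMP. A sketch (File.Move with an overwrite flag requires .NET Core 3.0 or later; on older runtimes use File.Replace):

using System.IO;

class FramePublisher
{
    // Write the new frame to a temp file first, then rename it over
    // the file ffmpeg is watching; a same-volume rename is effectively atomic.
    public static void PublishFrame(byte[] bmpBytes)
    {
        File.WriteAllBytes("input.tmp", bmpBytes);
        File.Move("input.tmp", "input.bmp", overwrite: true);
    }
}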
I have tried something along the lines of
C:\ffmpeg\ffmpeg -i "Blip_Select2.wav" -c:a wav -sample_fmt u8 "Blip_Select2_8bit.wav"
but I cannot figure out how to get a 4-bit conversion.
I am using the audio for a C# project; 4-bit is sufficient, and I prefer WAV so I won't have to distribute a possibly usage-restricted decoder with my files.
Just a heads up that -b 4 with SoX seems to use the MS ADPCM codec, which encodes the difference between samples using 4 bits. If you want to produce a similar result using ffmpeg, you can use:
ffmpeg -i sound.wav -codec:a adpcm_ms -f wav sound_4bit.wav
Ok, so I did manage to find a solution for this. It's a great command-line utility similar to ffmpeg, called SoX (Sound eXchange). It can be found here:
http://sox.sourceforge.net/
The command line that converts to 4-bit WAV is this:
sox "sound.wav" -b 4 "sound_4bit.wav"
It works perfectly and I did not notice any quality drop, as the sampling rate is still 44100 Hz, while the size drops to 1/4, as you would expect going from 16-bit to 4-bit samples.
An important note: this works well only if your audio is clean and not recorded too loud, such as correctly recorded speech (which is what I am using it for), but it also works for music as long as it's not too loud.
I'm using Flowplayer 3.1.1 for streaming videos to the browser. The videos are uploaded by users, and they may upload different formats. What would be a solution to stream the videos as MP4, whatever format they upload? I'm currently using ffmpeg commands.
ffmpeg -i "InputFile.mp4" -sameq -vcodec libx264 -r 35 -acodec libfaac -y "OutputFile.mp4"
But larger video files (say 100 MB) take a minute or more to load and buffer in Flowplayer. I think the problem is with my encoding.
I welcome your valuable suggestions!
The problem comes from the metadata: ffmpeg puts this data at the end of the file, and for progressive download you must move it to the beginning. You can use MP4Box or qt-faststart after the ffmpeg step.
MP4Box -inter 1000 file.mp4

or

qt-faststart in.mp4 out.mp4
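Note that newer ffmpeg builds can also do this in the same encoding pass with the -movflags +faststart output option, which moves the moov atom to the front of the file without a separate tool. A sketch mirroring the command above (with the built-in aac encoder standing in for libfaac, which modern builds no longer ship):

ffmpeg -i "InputFile.mp4" -vcodec libx264 -acodec aac -movflags +faststart -y "OutputFile.mp4"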