I have a camera sending raw frames to my application, and I need to generate an H.264 stream from those frames and make it playable in a browser with low latency. My idea is to use a WebRTC stream in order to keep latency to a minimum.
Until now my approach has been the following:
1) Use FFmpeg to generate an H.264/RTSP stream by means of the command
ffmpeg -fflags nobuffer -re -i "frames%05d.bmp" -pix_fmt yuv420p -c:v libx264 -crf 23 -f rtsp rtsp://localhost:8554/mystream
2) Use RTSP simple server to publish the RTSP stream.
3) Use RTSPtoWeb to generate a WebRTC stream playable by browsers.
Please note input frames are 728x544 bitmaps.
So far I have had no luck: the RTSP stream produced at step (2) is playable with VLC, but it has problems when played via WebRTC, e.g. continuous freezes. Please note that I can play H.264/RTSP streams produced by AXIS IP cameras through RTSPtoWeb with no problem.
Furthermore, I need the frames to be passed to FFmpeg by a C# application instead of being read from disk.
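A minimal sketch of that idea, not the original setup: the C# application launches FFmpeg as a child process and writes raw frames to its standard input instead of having FFmpeg read frames%05d.bmp from disk. The pixel format (bgr24), the 25 fps rate, the zerolatency tuning and the GetNextFrame placeholder are all assumptions.

using System;
using System.Diagnostics;
using System.IO;

class FfmpegPipe
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            // -f rawvideo tells FFmpeg that stdin carries headerless frames;
            // width/height/pixel format must match what the camera actually delivers.
            Arguments = "-f rawvideo -pix_fmt bgr24 -s 728x544 -r 25 -i - " +
                        "-pix_fmt yuv420p -c:v libx264 -tune zerolatency -f rtsp rtsp://localhost:8554/mystream",
            RedirectStandardInput = true,
            UseShellExecute = false
        };

        using var ffmpeg = Process.Start(psi);
        using Stream stdin = ffmpeg.StandardInput.BaseStream;

        // Replace GetNextFrame() with whatever callback delivers the camera's raw frame bytes.
        byte[] frame;
        while ((frame = GetNextFrame()) != null)
        {
            stdin.Write(frame, 0, frame.Length);    // one full 728x544 frame per write
        }
        stdin.Close();                              // EOF lets FFmpeg finish the stream cleanly
        ffmpeg.WaitForExit();
    }

    static byte[] GetNextFrame() => null;           // placeholder for the camera interface
}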
Of course, if you know of a way to directly generate an H.264/WebRTC stream from an image sequence, that would be wonderful.
Has anyone ever tried something like this?
I have a URL (<ip>/ipcam/mpeg4.cgi) that points to my IP camera, which is connected via Ethernet.
Accessing the URL results in an infinite stream of video (possibly with audio) data.
I would like to store this data into a video file and play it later with a video player (HTML5's video tag is preferred as the player).
However, the straightforward approach of simply saving the stream data into an .mp4 file didn't work.
I have looked into the file; it turned out there are some headers (apparently multipart part headers rather than HTML) mixed in with the video data. I manually excluded them using a binary editing tool, and yet no player could play the rest of the file.
The headers are:
--myboundary
Content-Type: image/mpeg4
Content-Length: 76241
X-Status: 0
X-Tag: 1693923
X-Flags: 0
X-Alarm: 0
X-Frametype: I
X-Framerate: 30
X-Resolution: 1920*1080
X-Audio: 1
X-Time: 2000-02-03 02:46:31
alarm: 0000
My question should be pretty clear now, and I would appreciate any help or suggestions. I suspect I have to create some MP4 headers myself based on the values above; however, I fail to understand format descriptions such as these.
I have the following video stream settings on my IP camera:
I could also use the ffmpeg tool, but no matter how I mix the program's arguments, it keeps giving me the same error.
It looks like your server is sending H.264 encoded 'rawvideo' in Annex B byte stream format.
It might be remuxed to .mp4 with something like the command line below:
ffmpeg -f h264 -i {input file} -c:v copy out.mp4
Saving an audio/video stream into a file is not an easy job. If it's video only, using the MPEG-2 TS format is the easiest way to go.
For .mp4 streaming, consider -movflags faststart -> Recommendation on the best quality/performance H264 encoder for video encoding?
** Update: the -bsf h264_mp4toannexb option can be omitted here; it converts in the other direction (MP4/AVCC to Annex B), and FFmpeg handles the Annex B to MP4 conversion automatically when stream-copying.
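If you would rather strip the per-frame headers programmatically than with a hex editor, a rough C# sketch follows. The camera URL, boundary handling and output file name are assumptions; it keeps only the part bodies (which, per the above, should be raw H.264) so the result can then be remuxed with FFmpeg. Audio parts, if the camera interleaves them, would additionally need to be filtered out by Content-Type.

using System;
using System.IO;
using System.Net.Http;
using System.Text;

class MultipartDump
{
    static void Main()
    {
        using var http = new HttpClient();
        // Placeholder URL taken from the question; <ip> must be replaced with the camera address.
        using Stream input = http.GetStreamAsync("http://<ip>/ipcam/mpeg4.cgi").Result;
        using FileStream output = File.Create("camera.h264");

        while (true)
        {
            // Read the boundary and header lines of one part, up to the blank line.
            int contentLength = -1;
            string line;
            while ((line = ReadLine(input)) != null && line.Length > 0)
            {
                if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
                    contentLength = int.Parse(line.Substring("Content-Length:".Length).Trim());
            }
            if (line == null) break;            // stream ended
            if (contentLength < 0) continue;    // boundary-only or malformed part

            // Copy exactly the announced number of payload bytes to the output file.
            byte[] payload = new byte[contentLength];
            int read = 0;
            while (read < contentLength)
            {
                int n = input.Read(payload, read, contentLength - read);
                if (n == 0) return;             // connection closed mid-frame
                read += n;
            }
            output.Write(payload, 0, payload.Length);
        }
    }

    // Reads one CRLF-terminated ASCII line; returns null at end of stream.
    static string ReadLine(Stream s)
    {
        var sb = new StringBuilder();
        int b;
        while ((b = s.ReadByte()) != -1 && b != '\n')
            if (b != '\r') sb.Append((char)b);
        return b == -1 && sb.Length == 0 ? null : sb.ToString();
    }
}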
Well, this is not as straightforward as it seems:
1) The HTML5 <video> tag has some requirements for the MP4 stream: the metadata atoms that describe duration and sample locations (the moov atom) must be at the beginning of the file, so the browser can start playing before the whole file has been downloaded. Most MP4 files are not written this way, so your option is to reformat them with FFmpeg or other tools (see this, and the example command after this list) and then you can serve the file as is.
2) Nginx has a module that allows streaming MP4 files; I haven't used it, but it could be useful to you, since I guess it takes care of these internals.
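For point 1), an existing file can usually be remuxed without re-encoding so that the moov atom is moved to the front (file names here are placeholders):
ffmpeg -i input.mp4 -c copy -movflags faststart output.mp4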
I'm developing a C# video streaming application.
So far I have been able to capture video frames using OpenCV and encode them using FFmpeg, and also to capture and encode audio using FFmpeg and save the data into a queue.
The problem I am facing is that when I send both streams simultaneously, one of them gets lost. I stream by first sending a video packet and then an audio packet, but the player identifies the video stream first, starts to play it, and never plays the audio packets.
Can anyone suggest a method to accomplish this? It would be greatly appreciated.
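One common cause of this symptom is that the two elementary streams are sent as unrelated blobs instead of being interleaved with timing information the player can use. The sketch below is purely hypothetical framing, not the poster's protocol: the packet layout, stream ids and queue types are all assumptions. Each encoded packet is tagged with a stream id and a timestamp, and the two queues are drained in timestamp order so audio and video stay interleaved on the wire. In practice you would more often mux both streams into a standard container (MP4, MPEG-TS) or use RTP, so the player knows how to demultiplex and synchronize them.

using System;
using System.Collections.Concurrent;
using System.IO;

class AvPacket
{
    public byte StreamId;      // 0 = video, 1 = audio (arbitrary choice for this sketch)
    public long TimestampMs;   // capture/encode timestamp in milliseconds
    public byte[] Data;        // encoded payload
}

class Interleaver
{
    // Drains the two queues and writes whichever pending packet is older first,
    // so the receiver sees audio and video in presentation order.
    public static void Pump(BlockingCollection<AvPacket> video,
                            BlockingCollection<AvPacket> audio,
                            Stream network)
    {
        var writer = new BinaryWriter(network);
        AvPacket pendingVideo = null, pendingAudio = null;

        while (true)
        {
            if (pendingVideo == null && !video.IsCompleted) video.TryTake(out pendingVideo, 10);
            if (pendingAudio == null && !audio.IsCompleted) audio.TryTake(out pendingAudio, 10);

            if (pendingVideo == null && pendingAudio == null)
            {
                if (video.IsCompleted && audio.IsCompleted) break;  // both producers finished
                continue;                                           // nothing ready yet
            }

            bool sendVideo = pendingAudio == null ||
                             (pendingVideo != null && pendingVideo.TimestampMs <= pendingAudio.TimestampMs);
            AvPacket p = sendVideo ? pendingVideo : pendingAudio;
            if (sendVideo) pendingVideo = null; else pendingAudio = null;

            // Simple length-prefixed frame: id, timestamp, payload size, payload.
            writer.Write(p.StreamId);
            writer.Write(p.TimestampMs);
            writer.Write(p.Data.Length);
            writer.Write(p.Data);
        }
        writer.Flush();
    }
}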
I'm creating a video chat application, but I'm having issues streaming microphone audio. I have video streams working already, but I was hoping to find out the best method of capturing a laptop's built-in microphone and streaming it. At the moment I'm using a NetworkStream for sending the video. I have limited experience with NAudio (http://naudio.codeplex.com/), but none of the microphone-capture examples seems to include a "new audio frame" event (which is the method I use for sending video frames).
I've been looking at http://voicerecorder.codeplex.com/ but it seems to be more than I need and doesn't cover streaming.
How do I capture microphone audio and stream it, if possible without the overhead of saving the audio to a file? I'd appreciate a simple example.
1) Create a new WaveIn object.
2) Call StartRecording.
3) In the DataAvailable event handler, transmit args.BytesRecorded bytes from args.Buffer across the network.
Note that this will mean you are transmitting PCM, which is not very efficient. Normally for network streaming you would use a codec. In the NAudioDemo source code there is a Network Chat demo showing this in action.
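A minimal sketch of those three steps with NAudio's WaveInEvent, assuming an already-connected NetworkStream (how the TcpClient is set up is outside the scope of the original answer):

using System.Net.Sockets;
using NAudio.Wave;

class MicStreamer
{
    private readonly WaveInEvent waveIn = new WaveInEvent();

    public void Start(NetworkStream network)
    {
        // 16 kHz, 16-bit, mono keeps the raw PCM bandwidth modest.
        waveIn.WaveFormat = new WaveFormat(16000, 16, 1);

        // DataAvailable fires whenever a buffer of microphone samples is ready,
        // which plays the role of a "new audio frame" event: send the bytes straight out.
        waveIn.DataAvailable += (sender, args) =>
            network.Write(args.Buffer, 0, args.BytesRecorded);

        waveIn.StartRecording();
    }

    public void Stop() => waveIn.StopRecording();
}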
I am developing a VoIP call recording system in C#. I have used ffmpeg.exe to decode the streams.
What I have done so far is to capture all incoming and outgoing RTP packets and write them (only the payload part) to separate files. With ffmpeg I then convert each file to MP3 and mix the incoming and outgoing audio together.
Now I have found out that both the G.729 and G.711 codecs arrive within a single stream, and the decoding commands are different for the two codecs.
So I need to decode them separately, but I don't know how to overcome this issue.
Can anyone suggest a solution?
I am using Wireshark to analyse the packets.
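One possible approach (a sketch only, with hypothetical file names, and assuming you already receive each raw RTP packet as a byte array): split the payloads into per-codec files by reading the payload type from the RTP header, so each file can afterwards be decoded with the appropriate ffmpeg command. Payload type numbers follow RFC 3551: 0 = PCMU and 8 = PCMA (G.711), 18 = G.729.

using System.IO;

class RtpDemux
{
    private readonly FileStream g711 = new FileStream("leg_g711.raw", FileMode.Append, FileAccess.Write);
    private readonly FileStream g729 = new FileStream("leg_g729.raw", FileMode.Append, FileAccess.Write);

    public void OnRtpPacket(byte[] packet)
    {
        if (packet.Length < 12) return;               // fixed RTP header is 12 bytes

        int payloadType = packet[1] & 0x7F;           // lower 7 bits of the second byte
        int csrcCount = packet[0] & 0x0F;             // contributing-source count
        int headerLen = 12 + 4 * csrcCount;           // header extensions (X bit) not handled here
        if (packet.Length <= headerLen) return;

        FileStream target = payloadType switch
        {
            0 or 8 => g711,                           // PCMU / PCMA
            18 => g729,                               // G.729
            _ => null                                 // ignore e.g. telephone-event packets
        };
        target?.Write(packet, headerLen, packet.Length - headerLen);
    }
}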
Edit:
Below I have added the filter details.
Is there any open-source software for call recording other than Oreka?
Thank you.