I'm developing a C# video streaming application.
So far I have been able to capture video frames using OpenCV, encode them with FFmpeg, capture and encode
audio with FFmpeg as well, and save the data into a queue.
The problem I am facing is that when I send both streams simultaneously, one of them is lost. That is, I stream by sending a video packet first and then an audio packet, but the player identifies the video stream first, starts playing it, and never plays the audio packets.
Can anyone suggest a method to accomplish this? It would be greatly appreciated.
Related
I have read through several posts about H264 recording, but none of them really answer my question, so here is what I am trying to do.
A server is sending H.264-encoded video packets to me, and I would like to capture the packets and turn them into a video file (.mpeg or .avi).
Here is how I envision the setup:
I need to set up a UDP listener to capture the video packets, then send the packet payloads to a DirectShow graph for processing.
The DirectShow graph should consist of an H.264 decoder and an MPEG encoder.
Am I on the right track?
Thank you
If all you want to do is capture the H.264 stream and stick it into a container, I would use FFmpeg. I don't know the exact command line, so this is untested, but try something like...
ffmpeg -f h264 -i - -c copy -f mp4 output.mp4
Then, write to it over STDIN. It should detect your stream type after several packets, and begin writing to the MP4 file.
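From C#, the piping described above can be sketched by launching ffmpeg as a child process and writing received payloads to its standard input. This is a sketch only: it assumes ffmpeg is on your PATH and that the packets form a raw Annex-B H.264 elementary stream, and ReceivePacket() is a placeholder for your UDP receive loop.

```csharp
using System.Diagnostics;

class FfmpegMuxer
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg", // assumes ffmpeg is on PATH
            Arguments = "-f h264 -i - -c copy -f mp4 output.mp4",
            RedirectStandardInput = true,
            UseShellExecute = false
        };

        using (var ffmpeg = Process.Start(psi))
        using (var stdin = ffmpeg.StandardInput.BaseStream)
        {
            // Placeholder receive loop for this sketch:
            // byte[] payload = ReceivePacket();
            // stdin.Write(payload, 0, payload.Length);
        }
        // Disposing stdin closes the pipe, which signals end-of-stream
        // so ffmpeg can finalize the MP4 file.
    }
}
```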
I'm creating a video chat application, but I'm having issues streaming microphone audio. I already have the video streams working, but I was hoping to find the best method of capturing a laptop's built-in microphone and streaming it. At the moment I'm using a NetworkStream to send the video. I have limited experience with NAudio (http://naudio.codeplex.com/), but none of its microphone-capture examples seem to include a new-audio-frame event (which is how I send video frames).
I've been looking at http://voicerecorder.codeplex.com/ but it seems to be more than I need and doesn't cover streaming.
How do I capture microphone audio and stream it, ideally without the overhead of first saving the audio to a file? I'd appreciate a simple example.
Create a new WaveIn object.
Call StartRecording.
In the DataAvailable event handler, transmit args.BytesRecorded bytes from args.Buffer across the network.
Note that this means you are transmitting raw PCM, which is not very efficient; normally for network streaming you would use a codec. The Network Chat demo in the NAudioDemo source code shows this in action.
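The steps above can be sketched as follows. This uses WaveInEvent (the callback-thread variant of WaveIn, convenient outside a message loop) and sends each buffer over UDP; the endpoint address and port are placeholders.

```csharp
using System.Net.Sockets;
using NAudio.Wave;

class MicStreamer
{
    static void Main()
    {
        var udp = new UdpClient();
        udp.Connect("192.168.1.10", 5005); // placeholder endpoint

        var waveIn = new WaveInEvent
        {
            WaveFormat = new WaveFormat(8000, 16, 1) // 8 kHz, 16-bit, mono PCM
        };
        waveIn.DataAvailable += (s, args) =>
        {
            // args.Buffer holds args.BytesRecorded bytes of raw PCM.
            udp.Send(args.Buffer, args.BytesRecorded);
        };
        waveIn.StartRecording();

        System.Console.ReadLine(); // record until Enter is pressed
        waveIn.StopRecording();
    }
}
```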
I am developing a VoIP call recording system in C#. I have used ffmpeg.exe to decode the streams.
What I have done so far is capture all incoming and outgoing RTP packets and write them (payload only) to separate files. I then use FFmpeg to convert each to MP3 and mix the incoming and outgoing audio together.
I have now found that both the G.729 and G.711 codecs arrive within a single stream, and the decoding commands are different for the two codecs.
So I need to decode them separately, but I don't know how to overcome this issue.
Can anyone suggest a solution?
I am using Wireshark to analyse the packets.
Edit
Below I have added the filter details.
Is there any open-source software for call recording other than Oreka?
Thank you.
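One way to handle the mixed stream is to demultiplex on the RTP payload type before writing the payload files, so each codec ends up in its own file and can be decoded with the matching ffmpeg command. A sketch, assuming minimal 12-byte RTP headers (no CSRC entries or extensions) and the static payload type assignments from RFC 3551:

```csharp
using System.Collections.Generic;
using System.IO;

class RtpDemuxer
{
    // Per RFC 3551: payload type 0 = G.711 mu-law, 8 = G.711 A-law, 18 = G.729.
    static readonly Dictionary<int, string> CodecNames = new Dictionary<int, string>
    {
        { 0, "pcmu" }, { 8, "pcma" }, { 18, "g729" }
    };

    const int RtpHeaderSize = 12; // assumes no CSRC entries or extensions

    // Appends one packet's payload to a per-codec file so that each file
    // can later be decoded with the ffmpeg command for that codec.
    public static void WritePayload(byte[] packet, string prefix)
    {
        if (packet.Length <= RtpHeaderSize) return;

        int payloadType = packet[1] & 0x7F; // low 7 bits of the second header byte
        string codec;
        if (!CodecNames.TryGetValue(payloadType, out codec)) return; // unknown PT

        using (var fs = new FileStream(prefix + "." + codec + ".raw",
                                       FileMode.Append, FileAccess.Write))
        {
            fs.Write(packet, RtpHeaderSize, packet.Length - RtpHeaderSize);
        }
    }
}
```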
I have sent audio and video over the network separately, but I want to send synchronized audio and video data. If anyone could point me to a solution, it would be greatly appreciated.
P.S.: I am using NAudio for audio and DirectShow for video.
I assume the first point you have to make sure of is that you are generating RTCP packets at some fixed interval. RTCP maps the RTP timestamps to NTP time, and this is what the client needs to sync audio and video.
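As a sketch of how the client uses that mapping: each RTCP Sender Report pairs an NTP wall-clock time with an RTP timestamp, so any later RTP timestamp in the same stream can be converted to wall-clock time and compared across the audio and video streams. The function below assumes you have already parsed those two fields out of the most recent Sender Report; clockRate is 8000 for telephony audio and 90000 for video per RFC 3551.

```csharp
class RtcpSync
{
    // Maps an RTP timestamp to wall-clock seconds using the
    // (NTP time, RTP timestamp) pair from the latest RTCP Sender Report.
    public static double RtpToWallClock(uint rtpTimestamp,
                                        uint srRtpTimestamp,
                                        double srNtpSeconds,
                                        int clockRate)
    {
        // Unsigned subtraction handles 32-bit timestamp wrap-around.
        uint elapsedTicks = unchecked(rtpTimestamp - srRtpTimestamp);
        return srNtpSeconds + (double)elapsedTicks / clockRate;
    }
}
```

Audio and video packets whose mapped wall-clock times match should then be rendered together.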
I'm quite new to Windows programming and I'm trying to set up a DirectShow graph to stream a webcam feed over a network so I can monitor the output on an iPad.
I've set up a graph in C# using DirectShowLib and FFDShow that compresses the raw output from a capture device and writes it to an AVI file.
Now I'm trying to work out how to broadcast the stream over the network.
The only sample code I can find for network streaming with DirectShow relates to the Windows Media library, which only seems to output ASF-formatted streams.
How can I broadcast a stream in a format other than ASF using DirectShow? Can I configure the ASF Writer to output an AVI/MPEG-formatted stream, or do I need to write my own DirectShow filter?
Are there any examples of streaming AVI over a network using DirectShow?
Thanks for reading,
Josh
Well, I ended up using VLC to create an MJPEG stream. I did try using VLC's HLS plugin, but I found that iOS will only play one video at a time, which was no good as I want to display several webcams; MJPEG gets around this.
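For reference, an MJPEG-over-HTTP stream is just a multipart HTTP response, which is why a client can show several of them at once. A minimal sketch of the server side in C#, where getNextJpegFrame is a placeholder for whatever supplies JPEG-encoded frames from the capture graph:

```csharp
using System.Net;
using System.Text;

class MjpegServer
{
    // Writes frames to one client as a multipart/x-mixed-replace stream;
    // each part is a complete JPEG image.
    public static void Serve(HttpListenerContext ctx,
                             System.Func<byte[]> getNextJpegFrame)
    {
        ctx.Response.ContentType = "multipart/x-mixed-replace; boundary=frame";
        var stream = ctx.Response.OutputStream;

        while (true)
        {
            byte[] jpeg = getNextJpegFrame();
            string header = "--frame\r\n" +
                            "Content-Type: image/jpeg\r\n" +
                            "Content-Length: " + jpeg.Length + "\r\n\r\n";
            byte[] headerBytes = Encoding.ASCII.GetBytes(header);
            stream.Write(headerBytes, 0, headerBytes.Length);
            stream.Write(jpeg, 0, jpeg.Length);
            byte[] crlf = Encoding.ASCII.GetBytes("\r\n");
            stream.Write(crlf, 0, crlf.Length);
        }
    }
}
```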