Streaming video from C# server to Wowza Streaming Engine - C#

I'm facing a problem restreaming H.264 video, received from my device via TCP, to Wowza Streaming Engine. The problem is that I do not know how to forward the byte array (byte[]). I have read that it is possible via RTSP/RTMP/MPEG-TS, but I have not found any library for this. I know the video I receive is OK, because after saving the frames to a file I am able to send it to Wowza using ffmpeg. I have also tried making ffmpeg listen on a UDP IP and port and on an HTTP IP and port, but nothing happened.
My question is:
Is it possible to send bytes to ffmpeg without saving a file to the hard drive?
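ffmpeg can read its input from stdin (`-i pipe:0`), so nothing has to touch the disk. Below is a minimal sketch of the idea in C#: it starts ffmpeg as a child process, tells it to expect raw H.264 on standard input, copies the stream without re-encoding and pushes it to Wowza over RTMP. The Wowza URL and the exact input options are assumptions and depend on how your device frames the H.264 stream.

```csharp
using System;
using System.Diagnostics;
using System.IO;

class FfmpegPipe
{
    // Placeholder RTMP target - replace with your Wowza application/stream name.
    const string WowzaUrl = "rtmp://wowza-host:1935/live/myStream";

    static Process StartFfmpeg()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            // Read raw H.264 from stdin (pipe:0), copy it without re-encoding,
            // and push it to Wowza as an RTMP/FLV stream.
            Arguments = $"-f h264 -i pipe:0 -c:v copy -f flv {WowzaUrl}",
            RedirectStandardInput = true,
            UseShellExecute = false
        };
        return Process.Start(psi);
    }

    // Call this for every byte[] chunk received from the device over TCP.
    static void WriteChunk(Stream ffmpegStdin, byte[] chunk)
    {
        ffmpegStdin.Write(chunk, 0, chunk.Length);
        ffmpegStdin.Flush();
    }

    static void Main()
    {
        using var ffmpeg = StartFfmpeg();
        var stdin = ffmpeg.StandardInput.BaseStream;

        // In the real application, your TCP receive loop would call
        // WriteChunk(stdin, receivedBytes) for each incoming buffer.

        stdin.Close();      // signals EOF to ffmpeg
        ffmpeg.WaitForExit();
    }
}
```

The key point is that ffmpeg's stdin replaces the temporary file you are currently writing; everything else in your ffmpeg command line stays the same.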

Related

Video streaming for local network using C#

I need to stream a video located on a local network between two computers using C#. I am thinking of using TCP rather than UDP, since I need reliability. I have already built a simple program for data transfer over TCP/IP. Now I want to build TCP communication for video streaming. Is there any sample code or suggestion for both the server and the client side? A minimal sketch is shown below.
I have already tried opening a video on the local network and encoding it the same way I do for data transfer, but it didn't work.
Thanks.
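For the record, a bare-bones way to move a video file over TCP in C# is to treat it as an ordinary byte stream: the server reads the file in chunks and writes them to the socket, and the client reads them back and hands them to whatever player or decoder it uses. This is only a rough sketch (file path, port and buffer size are made up); it gives you reliable transfer, not real-time playback, which additionally needs a container/decoder on the receiving end.

```csharp
using System.IO;
using System.Net;
using System.Net.Sockets;

class VideoOverTcp
{
    // Server: send one video file to the first client that connects.
    public static void Serve(string path, int port = 9000)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        using var client = listener.AcceptTcpClient();
        using var net = client.GetStream();
        using var file = File.OpenRead(path);
        file.CopyTo(net, 64 * 1024);   // stream the file in 64 KB chunks
        listener.Stop();
    }

    // Client: receive the stream and write it to a local file
    // (or feed the buffers to a decoder instead of a FileStream).
    public static void Receive(string host, string outPath, int port = 9000)
    {
        using var client = new TcpClient(host, port);
        using var net = client.GetStream();
        using var file = File.Create(outPath);
        net.CopyTo(file, 64 * 1024);
    }
}
```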

UWP IoT Core RTSP Streaming Audio

I have managed to establish TCP host/client sockets between multiple Raspberry Pi 3 boards. I would like to find out how to stream audio via RTSP between the host and the clients. I have seen a lot of video streaming discussions, but I haven't run into any audio streaming thread that I could reference. Can anyone help?
Thanks.
RTSP is a real-time streaming protocol: you can stream whatever you want in real time, such as video, audio, text and so on. RTP is the transport protocol used to carry the media data that is negotiated over RTSP, so the two are used together. The SharpRTSP library mentioned in my comment is supported on UWP. If you want to stream audio only, you can use the G711Payload; G.711 is a common audio codec. Of course, you can also find other libraries. You can refer to the RTSPClient.cs sample in the repo.
You can also search for a sample titled "TCP Audio Streamer and Player (Voice Chat over IP)"; it works fine. It is a VoIP project that sends and receives audio data over TCP. You could port the client side to UWP.
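If you go the plain-TCP route described in that sample rather than RTSP, the sending side can be as simple as length-prefixed audio buffers written to a socket. The sketch below assumes you already have PCM (or G.711-encoded) buffers from your capture pipeline; on UWP you would typically use StreamSocket instead of TcpClient, but the framing idea is the same.

```csharp
using System;
using System.Net.Sockets;

class AudioSender
{
    readonly NetworkStream _net;

    public AudioSender(string host, int port)
    {
        var client = new TcpClient(host, port);
        _net = client.GetStream();
    }

    // Send one captured audio buffer, prefixed with its length so the
    // receiver knows where each frame ends.
    public void SendFrame(byte[] audio)
    {
        var len = BitConverter.GetBytes(audio.Length);
        _net.Write(len, 0, len.Length);
        _net.Write(audio, 0, audio.Length);
    }
}
```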

HoloLens remote support and video/audio live streaming

We are currently developing a HoloLens remote support module using LowLatencyMRVC. I would like to know if there is a setting or option to receive voice along with the HoloLens video from LowLatencyMRVC. Currently, the audio data is recorded one second at a time, turned into a WAV file, and sent over a socket through the server to the client, which plays the received WAV file. However, when this voice transfer is combined with LowLatencyMRVC, the load on the HoloLens increases and the program terminates. Are there any suggestions on how to solve this, or another approach to voice communication?

Playing streamed audio data (C#)

I am trying to develop a Windows application in C# that can play streamed audio data. Basically, I will have a client application responsible for playing different audio files. Currently, the client application extracts the hardware configuration parameters from the file header and then streams the file data (a PCM stream) over the network.
Is it possible to use the hardware configuration parameters sent from the client to configure the actual hardware (on the server end) and then feed it the file data stream so that it can play the audio?
While searching, I came across NAudio. Is NAudio capable of doing this, or would the better option be to switch to native C/C++ code using the DirectSound APIs?
update:
By configuring hardware, I mean setting the parameters related to audio playback. These parameters include the sample rate (e.g. 44100 Hz), the number of channels (e.g. stereo), the sample format (e.g. 16-bit little endian), etc.
My client application runs on Linux, where I have added an ALSA driver that intercepts the PCM stream and the hw_params configuration and sends them to the server.
update ends
Thanks.
If you look at the latest NAudio code, you will see there are two examples in the NAudioDemo sample app that play streaming audio. One is a rudimentary chat application that sends compressed voice via UDP, the other plays streaming MP3 internet radio. I'd suggest you have a look at that and try the sample app to see if it meets your needs.
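As a concrete illustration of the NAudio route, the sketch below builds a WaveFormat from the parameters you describe (sample rate, channels, 16-bit samples), pushes incoming PCM buffers into a BufferedWaveProvider, and plays them through WaveOutEvent. This is a rough sketch, not the NAudioDemo code itself; the network receive loop and the parameter values are assumptions.

```csharp
using NAudio.Wave;

class PcmPlayer
{
    readonly BufferedWaveProvider _buffer;
    readonly WaveOutEvent _output;

    // sampleRate / channels / bitsPerSample come from the hw_params
    // configuration sent by the client (e.g. 44100, 2, 16).
    public PcmPlayer(int sampleRate, int channels, int bitsPerSample = 16)
    {
        var format = new WaveFormat(sampleRate, bitsPerSample, channels);
        _buffer = new BufferedWaveProvider(format)
        {
            DiscardOnBufferOverflow = true   // drop data if playback falls behind
        };
        _output = new WaveOutEvent();
        _output.Init(_buffer);
        _output.Play();
    }

    // Call this for every PCM chunk received from the network.
    public void AddPcm(byte[] data, int count)
    {
        _buffer.AddSamples(data, 0, count);
    }
}
```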

How does one capture H.323 voice traffic on a VOIP network?

What I am trying to do is capture the WAV data of a phone conversation on a VoIP network using SharpPcap/Pcap.Net.
We are using the H.323 recommendation, and my understanding is that the voice data is carried in RTP packets. However, there is no way to heuristically determine whether a UDP packet is an RTP packet, so we have to do more work before we can capture the data.
The H.323 recommendation apparently uses a lot of traffic on specific TCP ports to negotiate the call before the WAV data is sent via RTP. However, I am having very little luck determining what data is actually sent on those TCP ports, when it is sent, what the packets look like, how to handle it, etc.
If anyone has any information on how to go about this I'd really appreciate it. My Google-Fu seems to be failing me on this one.
Wireshark is your friend... I imagine it still has a plugin that lets you select a VoIP stream and save it to a file. The fun part will be if you are on a switched network.
Wireshark + VoIP
You have to parse the H.323 OLC (OpenLogicalChannel) messages from both sides; then you will know which packets to capture.
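Once the OLC exchange has told you which UDP ports carry the media, the capture side with SharpPcap is mostly a matter of setting a BPF filter on those ports and pulling the RTP payload out of each UDP packet. The sketch below is an outline only (the port number is a placeholder, and the SharpPcap/Packet.Net API has shifted between versions), not a drop-in solution.

```csharp
using PacketDotNet;
using SharpPcap;

class RtpCapture
{
    public static void Start(int rtpPort)   // port learned from the OLC exchange
    {
        var device = CaptureDeviceList.Instance[0];

        device.OnPacketArrival += (sender, e) =>
        {
            var raw = e.Packet;
            var packet = Packet.ParsePacket(raw.LinkLayerType, raw.Data);
            var udp = packet.Extract<UdpPacket>();
            if (udp == null) return;

            // udp.PayloadData now holds the RTP packet: a 12-byte header
            // (version, payload type, sequence number, timestamp, SSRC)
            // followed by the encoded audio.
            byte[] rtp = udp.PayloadData;
            // TODO: strip the RTP header and decode the payload (e.g. G.711).
        };

        device.Open(DeviceMode.Promiscuous, 1000);
        device.Filter = $"udp port {rtpPort}";
        device.StartCapture();
    }
}
```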
