We are currently developing a HoloLens remote support module using LowLatencyMRVC. Is there a setting or option to receive audio along with the HoloLens video from LowLatencyMRVC? At the moment we record the audio data one second at a time over a socket, turn it into a WAV file, and send the file through the server to the client, which then plays it back. However, when this audio reception runs alongside LowLatencyMRVC, the load on the HoloLens increases until the program is terminated. Are there any suggestions on how to solve this problem, or on a better approach to voice communication?
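In case it helps with the WAV step: the file can be built entirely in memory (avoiding disk I/O, which is expensive on the HoloLens) by prepending a standard 44-byte RIFF header to the raw PCM before writing it to the socket. A minimal sketch, assuming 16 kHz mono 16-bit PCM and a simple length-prefix framing (both are assumptions, not part of LowLatencyMRVC):

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static class WavSender
{
    // Wrap raw PCM in the standard 44-byte RIFF/WAVE header.
    static byte[] WrapAsWav(byte[] pcm, int sampleRate, short channels, short bitsPerSample)
    {
        short blockAlign = (short)(channels * bitsPerSample / 8);
        int byteRate = sampleRate * blockAlign;

        var ms = new MemoryStream();
        var w = new BinaryWriter(ms);
        w.Write(Encoding.ASCII.GetBytes("RIFF"));
        w.Write(36 + pcm.Length);               // RIFF chunk size
        w.Write(Encoding.ASCII.GetBytes("WAVE"));
        w.Write(Encoding.ASCII.GetBytes("fmt "));
        w.Write(16);                            // fmt chunk size
        w.Write((short)1);                      // 1 = uncompressed PCM
        w.Write(channels);
        w.Write(sampleRate);
        w.Write(byteRate);
        w.Write(blockAlign);
        w.Write(bitsPerSample);
        w.Write(Encoding.ASCII.GetBytes("data"));
        w.Write(pcm.Length);
        w.Write(pcm);
        w.Flush();
        return ms.ToArray();
    }

    // Send one second of captured PCM to the relay server as a WAV file,
    // with a 4-byte length prefix so the receiver can frame the messages.
    static void SendOneSecond(NetworkStream server, byte[] pcmSecond)
    {
        byte[] wav = WrapAsWav(pcmSecond, 16000, 1, 16);
        server.Write(BitConverter.GetBytes(wav.Length), 0, 4);
        server.Write(wav, 0, wav.Length);
    }
}
```

Reusing one preallocated buffer per second instead of allocating new arrays each time may also reduce the GC pressure that could be contributing to the crash.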
Related
I need to stream a video located on a local network between two computers using C#. I am thinking of using TCP rather than UDP, since I need reliability. I have already built a simple program for data transfer over TCP/IP. Now I want to build TCP communication for video streaming. Is there any sample code or suggestion for both the server and the client side?
I have already tried to open the video over the local network and to encode it the same way I do for data transfer, but it didn't work.
Thanks.
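For reference, a minimal sketch of the usual pattern: TCP already gives you an ordered byte stream, so the server can simply push the file in chunks and the client reassembles them (addresses, port, and paths below are placeholders):

```csharp
using System.IO;
using System.Net;
using System.Net.Sockets;

class VideoOverTcp
{
    // Server: read the video file and push it down the socket in chunks.
    static void Serve(string path, int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        using var client = listener.AcceptTcpClient();
        using var net = client.GetStream();
        using var file = File.OpenRead(path);
        file.CopyTo(net, 64 * 1024); // 64 KB chunks
    }

    // Client: receive the byte stream and write it back to a file
    // (or hand the chunks to a decoder for live playback).
    static void Receive(string host, int port, string outPath)
    {
        using var tcp = new TcpClient(host, port);
        using var net = tcp.GetStream();
        using var file = File.Create(outPath);
        net.CopyTo(file);
    }
}
```

For live playback rather than file transfer, you would feed the chunks to a decoder as they arrive; note that TCP retransmissions can stall playback, which is why streaming protocols often prefer UDP/RTP.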
I have developed a WinForms application in C#.NET. Essentially, I:
capture audio from microphone
capture video from webcam
I want to send these two buffers from one client to another, like a video call application.
I read that I should use the RTP protocol for this, but I don't know how to get started.
Can you advise me how to merge the audio and video data and send them over a socket from client to client? Or should I simply send audio and video separately over RTP?
I use this library to capture audio and video: the Accord.NET Framework.
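Sending audio and video as two separate RTP streams (different ports, synchronized by their timestamps) is the usual approach rather than merging them. RTP itself is just a 12-byte header in front of each media packet, normally carried over UDP. A minimal sketch of hand-packetizing one audio frame per RFC 3550 (the endpoint, payload type, and SSRC are assumptions):

```csharp
using System;
using System.Net.Sockets;

class RtpSender
{
    static ushort _sequence;

    // Build a minimal RFC 3550 RTP packet: a 12-byte header followed by the payload.
    static byte[] BuildPacket(byte payloadType, uint timestamp, uint ssrc, byte[] payload)
    {
        var packet = new byte[12 + payload.Length];
        packet[0] = 0x80;                        // version 2, no padding/extension/CSRC
        packet[1] = (byte)(payloadType & 0x7F);  // marker bit clear
        packet[2] = (byte)(_sequence >> 8);      // sequence number, big-endian
        packet[3] = (byte)_sequence;
        _sequence++;
        packet[4] = (byte)(timestamp >> 24);     // media timestamp, big-endian
        packet[5] = (byte)(timestamp >> 16);
        packet[6] = (byte)(timestamp >> 8);
        packet[7] = (byte)timestamp;
        packet[8] = (byte)(ssrc >> 24);          // stream identifier (SSRC)
        packet[9] = (byte)(ssrc >> 16);
        packet[10] = (byte)(ssrc >> 8);
        packet[11] = (byte)ssrc;
        Buffer.BlockCopy(payload, 0, packet, 12, payload.Length);
        return packet;
    }

    static void Main()
    {
        using var udp = new UdpClient();
        // Payload type 0 = PCMU (G.711 mu-law); 160 bytes = 20 ms of 8 kHz audio.
        // In the real app the payload comes from the Accord.NET capture callback.
        byte[] audioFrame = new byte[160];
        byte[] packet = BuildPacket(0, 0, 0x12345678, audioFrame);
        udp.Send(packet, packet.Length, "peer-host", 5004); // placeholder endpoint
    }
}
```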
I have managed to establish TCP host/client sockets between multiple Raspberry Pi 3s. I would like to find out how to stream audio via RTSP among the host and clients. I have seen a lot of video streaming discussions, but I haven't run into any audio streaming thread I could reference. Can anyone help?
Thanks.
RTSP is a real-time streaming protocol: it lets you stream whatever you want in real time, such as video, audio, text and so on. RTP is the transport protocol used to carry the media data that is negotiated over RTSP. The library SharpRTSP, which is mentioned in my comment, supports UWP. If you want to stream audio only, you can use the G711Payload; G.711 is a common audio codec. Of course, you can also find other libraries. You can refer to the RTSPClient.cs sample in the repo.
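To illustrate the G.711 step: mu-law encoding simply maps each 16-bit PCM sample of 8 kHz audio to one byte, which G711Payload then wraps into RTP. A minimal sketch using NAudio's codec helper (using NAudio here is my assumption; any G.711 implementation works):

```csharp
using NAudio.Codecs;

static class G711
{
    // G.711 mu-law maps each 16-bit PCM sample (8 kHz audio) to a single byte.
    static byte[] EncodeMuLaw(byte[] pcm16LittleEndian)
    {
        var encoded = new byte[pcm16LittleEndian.Length / 2];
        for (int i = 0; i < encoded.Length; i++)
        {
            short sample = (short)(pcm16LittleEndian[2 * i] | (pcm16LittleEndian[2 * i + 1] << 8));
            encoded[i] = MuLawEncoder.LinearToMuLawSample(sample);
        }
        return encoded;
    }
}
```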
You can search for a sample titled "TCP Audio Streamer and Player (Voice Chat over IP)"; it works fine. It is a proprietary VoIP project that sends and receives audio data over TCP. You can port the client to UWP.
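The core of such a TCP voice chat is just a capture callback writing buffers to a socket. A minimal sketch of the sending side using NAudio on the desktop (NAudio, the format, and the endpoint are my assumptions; the linked sample additionally compresses and frames the audio, and a UWP client would use its own capture API):

```csharp
using System;
using System.Net.Sockets;
using NAudio.Wave;

class VoiceSender
{
    static void Main()
    {
        using var tcp = new TcpClient("peer-host", 6000); // placeholder endpoint
        var net = tcp.GetStream();

        using var waveIn = new WaveInEvent
        {
            WaveFormat = new WaveFormat(8000, 16, 1) // 8 kHz, 16-bit, mono: typical for voice
        };
        // Each captured buffer goes straight to the socket.
        waveIn.DataAvailable += (s, e) => net.Write(e.Buffer, 0, e.BytesRecorded);
        waveIn.StartRecording();

        Console.WriteLine("Recording; press Enter to stop.");
        Console.ReadLine();
        waveIn.StopRecording();
    }
}
```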
I'm facing a problem with restreaming .h264 video, received from my device via TCP, to the Wowza Streaming Engine. The problem is that I do not know how to forward the byte array (byte[]). I have read that it is possible via RTSP/RTMP/MPEG-TS, but I have not found any library for this operation. I know that the video I receive is OK, because after saving the frames to a file I am able to send it to Wowza using ffmpeg. I have also tried to make ffmpeg listen on a UDP IP and port, and on an HTTP IP and port, but nothing happened.
My question is:
Is it possible to send bytes to ffmpeg without saving a file to the hard drive?
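For what it's worth, ffmpeg can read its input from standard input (`-i pipe:0`), so one approach is to launch ffmpeg as a child process and write the received byte[] frames to its stdin. A minimal sketch, assuming Annex-B H.264 frames and a placeholder Wowza RTMP URL:

```csharp
using System;
using System.Diagnostics;

class FfmpegPipe
{
    static void Main()
    {
        // Launch ffmpeg reading raw H.264 from stdin (pipe:0) and pushing RTMP to Wowza.
        // The URL is a placeholder; substitute your Wowza application and stream name.
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-f h264 -i pipe:0 -c:v copy -f flv rtmp://wowza-host:1935/live/myStream",
            RedirectStandardInput = true,
            UseShellExecute = false
        };

        using var ffmpeg = Process.Start(psi);
        var stdin = ffmpeg.StandardInput.BaseStream;

        // In the real application each byte[] frame arrives from the device's
        // TCP connection; here ReceiveFrameFromDevice() stands in for that.
        byte[] frame;
        while ((frame = ReceiveFrameFromDevice()) != null)
        {
            stdin.Write(frame, 0, frame.Length);
        }
        stdin.Close(); // EOF tells ffmpeg the stream has ended
        ffmpeg.WaitForExit();
    }

    // Hypothetical helper: returns one Annex-B H.264 frame, or null when done.
    static byte[] ReceiveFrameFromDevice() => null;
}
```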
I am trying to develop a Windows application using C# that can play streamed audio data. Basically, I will have a client application that is responsible for playing different audio files. Currently, the client application extracts the hardware configuration parameters from the file header and then streams the file data (a PCM stream) over the network.
So, is it possible to use the hardware configuration parameters sent from the client to configure the actual hardware (on the server end), and then feed it the file data stream so that it can play the audio?
While searching, I learned about NAudio. Is NAudio capable of doing this, or would the better option be to switch to native C/C++ code using the DirectSound APIs?
update:
By configuring hardware, I mean setting the parameters related to audio playback. These parameters include the sample rate (e.g. 44100 Hz), the number of channels (e.g. stereo), and the storage format (e.g. 16-bit little-endian).
My client application is on Linux, where I have planted an ALSA driver that intercepts the PCM stream and the hw_params configuration and sends them to the server.
update ends
Thanks.
If you look at the latest NAudio code, you will see there are two examples in the NAudioDemo sample app that play streaming audio. One is a rudimentary chat application that sends compressed voice over UDP; the other plays streaming MP3 internet radio. I'd suggest you have a look at those and try the sample app to see if it meets your needs.
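On the hw_params point specifically: NAudio can construct a WaveFormat from exactly those three parameters and play raw PCM pushed into a BufferedWaveProvider as it arrives from the network. A minimal sketch, assuming 44100 Hz, 16-bit stereo and a placeholder TCP source:

```csharp
using System;
using System.Net.Sockets;
using NAudio.Wave;

class PcmStreamPlayer
{
    static void Main()
    {
        // Build the format from the hw_params sent by the Linux client:
        // sample rate, bits per sample, channel count.
        var format = new WaveFormat(44100, 16, 2);

        var buffer = new BufferedWaveProvider(format)
        {
            BufferDuration = TimeSpan.FromSeconds(5) // headroom for network jitter
        };

        using var output = new WaveOutEvent();
        output.Init(buffer);
        output.Play();

        // Placeholder network source: queue raw PCM chunks as they arrive.
        using var client = new TcpClient("client-host", 9000);
        var net = client.GetStream();
        var chunk = new byte[4096];
        int read;
        while ((read = net.Read(chunk, 0, chunk.Length)) > 0)
        {
            buffer.AddSamples(chunk, 0, read);
        }
    }
}
```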