I am challenged with the task of streaming video and audio (camera and mic) from my Android API 19 devices directly to a server. Displaying a preview on the device is optional.
Following this link I learned that you can use MediaRecorder with a socket instead of an output file. But they describe this method as too slow (about a 3-second delay) because of the encoding. They also show a second method using preview frames (as MJPEG), but only for video.
Some apps from the store use the same method with an additional audio recorder, which causes a big delay between image and sound. I was hoping someone knows how to live-stream from Android to a server over Wi-Fi. C# code would be much appreciated. Third-party libraries are also welcome as long as they are free for commercial use. Any help/ideas are welcome, and thanks in advance.
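For the MediaRecorder-over-socket approach mentioned above, the server side just has to accept a TCP connection and sink whatever bytes the device writes. The question asks for C#; the sketch below uses Python purely as a neutral illustration of that receive loop, with a made-up output filename. Note that what arrives this way may need post-processing before it is playable, since MediaRecorder normally seeks back to finalize its container when recording stops.

```python
import socket
import threading

def serve_once(srv, out_path):
    """Accept one connection on an already-listening socket and write
    everything received (the raw MediaRecorder output) to out_path."""
    conn, _ = srv.accept()
    with conn, open(out_path, "wb") as f:
        while True:
            chunk = conn.recv(4096)   # raw container bytes from the device
            if not chunk:
                break                 # device closed the socket: recording ended
            f.write(chunk)

# Listen on an OS-assigned port (port 0); a real deployment would use a fixed one.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv, "capture.mp4"))
t.start()

# Stand-in for the phone: push some bytes into the socket.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"fake-mediarecorder-bytes")
cli.close()
t.join()
srv.close()
```

The same loop translates almost line for line to a C# `TcpListener` with a `NetworkStream`.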
Related
I'm making a program to do a livestream with XSplit Encoder (RTMP server). I need a site where people can watch that stream, and the player needs a button so each spectator can choose their video quality; the stream has to be fluid and of good quality. Can someone explain this or send me a link on how to do it? Please
(C#)
This is a VERY large undertaking, and an impossible question to answer unless you narrow the scope. You need an ingest server that takes in RTMP. You need a machine with enough CPU power to do all the transcodes. You need a site to play back on. You also need enough bandwidth (CDN) for all your viewers. How many viewers do you need to support? What platforms do you want to play back on? iOS? Then you need HLS. Web? Then you need RTMP. Or you can use DASH if you're OK with limiting it to modern browsers. Do you want it to play in Firefox? Then it MUST be Flash, because Firefox does not have native support for the H.264 codec. But Flash won't play on iOS. You can use JWPlayer Premium, which will play HLS in Flash. Also, is H.264 actually the codec you intend to use? Have you looked into services such as Zencoder live transcoding? Or Wowza with the transcoding module? Amazon offers preconfigured Wowza instances. What is your budget for this project? Why not just use Twitch?
Edit: You can probably string something together using ffmpeg:
https://trac.ffmpeg.org/wiki/StreamingGuide
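Following the StreamingGuide, a typical "string something together" command transcodes a source to H.264/AAC and pushes it to the RTMP ingest point. The sketch below (Python, just to compose and show the command) uses a hypothetical ingest URL and input file; actually running it requires ffmpeg on your PATH.

```python
# Hypothetical ingest URL and stream key -- replace with your server's values.
ingest = "rtmp://ingest.example.com/live/mystream"

ffmpeg_cmd = [
    "ffmpeg",
    "-re",                  # read input at its native rate (live pacing)
    "-i", "input.mp4",      # source: a file here; a capture device works too
    "-c:v", "libx264",      # H.264 video, what RTMP/Flash players expect
    "-preset", "veryfast",  # favour low CPU cost over compression (live use)
    "-b:v", "2500k",        # video bitrate
    "-g", "60",             # keyframe interval, lets players join mid-stream
    "-c:a", "aac",          # AAC audio
    "-b:a", "128k",
    "-f", "flv",            # RTMP carries an FLV container
    ingest,
]

# subprocess.run(ffmpeg_cmd) would start the stream; here we just show it.
print(" ".join(ffmpeg_cmd))
```

From there the ingest server (Wowza, nginx-rtmp, etc.) handles fan-out and any repackaging to HLS/DASH.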
Can anyone help me write code in WinRT for PAUSING and RESUMING the recording of audio files?
Classes used are as follows:
IRandomAccessStream for storing the captured audio in a random-access stream.
Windows.Media.Capture.MediaCapture for capturing the audio.
Windows.Storage.StorageFile for storing the file.
Please help me with code showing how to pause the stream and, on resume, append the newly captured audio to the existing stream.
Waiting for a reply.
Thank you.
Not sure why this has been downvoted. Have a look at this sample; it should get you started.
The sample linked above does indeed provide a good start on how to work with audio/video capture in a WinRT app.
However, none of the sample's scenarios answers the original question:
there seem to be no methods in MediaCapture that support pausing.
The only Store app I can see that supports pausing while recording is Microsoft's Windows Sound Recorder.
I assume that app is written in C++ and instead uses a custom sink to record audio, which also supports appending data to an existing stream.
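Since MediaCapture itself can't pause, a common workaround is to stop and restart recording so each pause/resume produces a separate segment, then append the segments into one stream afterwards. The sketch below illustrates that append step in Python with WAV files (the filenames and tone generator are invented stand-ins for recorded segments); in a WinRT app the same idea would be done against the recorded streams.

```python
import wave, struct, math

def write_tone(path, freq, seconds, rate=8000):
    """Write a short mono 16-bit sine-wave WAV -- a stand-in for one
    segment recorded between a 'resume' and the next 'pause'."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        for i in range(int(rate * seconds)):
            sample = int(32000 * math.sin(2 * math.pi * freq * i / rate))
            w.writeframes(struct.pack("<h", sample))

def append_segments(paths, out_path):
    """Concatenate WAV segments sharing one format into a single file,
    i.e. each 'resume' appends onto what was captured before the 'pause'."""
    with wave.open(out_path, "wb") as out:
        for n, path in enumerate(paths):
            with wave.open(path, "rb") as seg:
                if n == 0:
                    out.setparams(seg.getparams())  # copy format from segment 1
                out.writeframes(seg.readframes(seg.getnframes()))

write_tone("part1.wav", 440, 0.25)   # audio before the pause
write_tone("part2.wav", 660, 0.25)   # audio after the resume
append_segments(["part1.wav", "part2.wav"], "full.wav")
```

This only works cleanly for formats where segments can be concatenated at the sample level (PCM/WAV); compressed containers need their headers handled, which is presumably what a custom sink takes care of.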
I am currently developing a Windows Phone 7 application. What I want to do is render a video file (AVI, WMV, whatever...) from a sequence of images. So I just need something like a frame writer for video files (e.g. to create an in-game video by writing every Xth frame to the video stream).
I searched the whole internet, including Stack Overflow, but didn't find anything. As far as I know there are a lot of APIs and interfaces in the Windows Phone 7 stack for handling audio and video, so I think there must be a solution for this somehow.
BTW: I already had a look at C# Slicer and ffmpeg. Slicer isn't available for Windows Phone 7 and I wasn't able to port it, and ffmpeg isn't an option because it would require integrating an unmanaged library, which Microsoft doesn't allow.
I hope you can help me.
Would you consider doing this operation on a server? You could upload the images to your host, invoke ffmpeg to create the movie, then push the movie back to the phone. It might even be faster than having the phone do the encoding (assuming you don't have thousands of users hitting the host at the same time, of course!)
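The server-side idea above boils down to one ffmpeg invocation over the uploaded frames. The sketch below (Python, composing the command only) assumes frames were uploaded with a hypothetical `frame_0001.png`, `frame_0002.png`, ... naming scheme; running it for real needs ffmpeg installed on the host.

```python
# Server-side sketch: turn an uploaded image sequence into a movie with ffmpeg.
frame_rate = 30
cmd = [
    "ffmpeg",
    "-framerate", str(frame_rate),  # input rate: one image per game frame
    "-i", "frame_%04d.png",         # frames uploaded as frame_0001.png, ...
    "-c:v", "libx264",              # encode to H.264
    "-pix_fmt", "yuv420p",          # widest player compatibility
    "out.mp4",                      # result pushed back to the phone
]

# subprocess.run(cmd) would perform the encode; here we just show the command.
print(" ".join(cmd))
```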
Is there a way to simulate a webcam driver that provides a real-time screen capture (30 frames per second) as its output?
This is one of several features of ManyCam (free). It is a virtual webcam driver through which you can stream your real webcam video (with optional real-time video effects), video or image files, or your full/partial desktop.
Yes, just google video2webcam. It works quite well and will loop a video or picture as its output.
The driver's job is to provide a level of abstraction between the software and the hardware. The driver is supposed to issue commands to the hardware; it's not responsible for taking pictures and turning them into an animated GIF, for instance. It does low-level things like turning the device on and off and sending raw data to a socket.
That being said, if you need to create a virtual device driver, here's an overview of VDDs: Windows Programming/Device Driver Introduction
Generally these are not written in higher-level languages such as C#. Rather, they are written in languages such as C/C++. You will need the KMDF, or Kernel-Mode Driver Framework.
If you just need to access a webcam from a .NET application on a system with a webcam, you just need an API.
Open your browser, go to google.com, and type ".NET webcam API". You will see something like this:
Webcam in your own application
It appears that this is a wrapper around DirectShow.
I want to add a live video broadcasting facility to a website. I'm planning to use Silverlight and C#, but I'm a complete newbie on this topic.
What is the best way to do this?
Thanks in advance.
Update:
I have a camera. I want to capture its video and show it on my website, live. But I have no idea what I must do in between. Tell me everything I need to know about the capturing, streams, APIs, DLLs, and whatever else. Thanks.
If you're on Windows, you may want to consider the free Windows Media Encoder 9 SDK (http://www.microsoft.com/windows/windowsmedia/forpros/encoder/features.aspx). The idea is that Windows Media Encoder allows you to use any device as a media source (say, a video capture card or a streaming-enabled camcorder) and to push it out as an ASF stream, or even to multicast/broadcast it via Windows Media Services.
The WME SDK is simply a programmatic (COM) interface to WME. In principle you can automate the encoder and instruct it to use your camera as a source, and push it out to a specified UDP port or publishing point (for the latter you'll need Windows Server 2003/2008 with Windows Media Services installed).
A newer alternative is Microsoft Expression Encoder SDK (http://www.microsoft.com/downloads/details.aspx?FamilyId=9A077A3D-58CE-454C-B486-153F0578BE4A&displaylang=en), which "talks" to Expression Encoder and allows you to stream out to Silverlight clients (among others), but the caveat here is that EE is not free (WME is).
Do you have a live stream? Or is that the part you need? If you already have a live stream with a URI, then you could use Silverlight to display it, as all you need is a player and a URI (you can get a player anywhere; here's one: http://sl2videoplayer.codeplex.com).
If you don't have the stream, then what you will need is some mechanism to capture video and stream it. You can use Windows Media Server to do the streaming.
Can you clarify what pieces you have and what you need?