I've recently been learning about data streaming in C#. I've practiced building a simple client-server login program like so:
public static TcpClient SocketCLiente = new TcpClient();
public static string IPServer = "127.0.0.1";
NetworkStream _serverStream;

// Connect to the server and grab the stream before writing (8888 is just an example port).
SocketCLiente.Connect(IPServer, 8888);
_serverStream = SocketCLiente.GetStream();

byte[] outStream = System.Text.Encoding.ASCII.GetBytes(this._txtUserName.Text + "%" + this.txtPassword.Text + "$");
_serverStream.Write(outStream, 0, outStream.Length);
This works great with text, but what if I want to stream video to a client? How can I stream live video from my webcam to a C# application over LAN?
As the comment mentioned, you should check out How can I stream webcam video with C#?. In general, all you ever stream is bytes. As long as those bytes are formatted in a way that a player, such as Flash or Silverlight, can understand, it can play the video back.
Now I don't think that post really covers the internet streaming aspect. It really touches more on capturing and playing back locally. That's what DirectShow is good at: it abstracts away all the driver nonsense that you used to have to deal with to capture video and audio frame data.
Streaming to the internet, short of rolling your own streaming server, is usually accomplished by hooking into something like Red5, Wowza, Adobe FMS, RTSP, Silverlight streaming, etc.
I would use one of those solutions to hook into wherever you are getting your video data from. The gist is that you capture video/audio packets from DirectShow and then funnel those packets (formatted however the solution wants) to one of these streaming platforms.
Another option, though it is a little more "hackish", would be to stream your video as motion JPEG. This wouldn't let you send audio, though. Motion JPEG basically tells the client to keep a connection open and constantly download a new image to replace the old one. It's like a flipbook for the web. The advantage here is that you can roll your own motion JPEG code pretty easily, and it'd be a fun little project.
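If you want to experiment with that, here is a minimal sketch of the server side of a motion JPEG feed. It assumes an HttpListener prefix you pick yourself and a hypothetical GetNextJpeg() helper that returns the current webcam frame as a JPEG byte array:

// Push a multipart/x-mixed-replace stream so the client keeps swapping in the newest frame.
var listener = new System.Net.HttpListener();
listener.Prefixes.Add("http://+:8080/mjpeg/");    // example prefix, pick your own
listener.Start();
var context = listener.GetContext();              // blocks until a client connects
context.Response.ContentType = "multipart/x-mixed-replace; boundary=frame";
var output = context.Response.OutputStream;
while (true)
{
    byte[] jpeg = GetNextJpeg();                  // hypothetical helper: current webcam frame as JPEG bytes
    byte[] header = System.Text.Encoding.ASCII.GetBytes(
        "\r\n--frame\r\nContent-Type: image/jpeg\r\nContent-Length: " + jpeg.Length + "\r\n\r\n");
    output.Write(header, 0, header.Length);
    output.Write(jpeg, 0, jpeg.Length);
    output.Flush();
    System.Threading.Thread.Sleep(100);           // roughly 10 frames per second
}

Browsers that understand motion JPEG (Firefox, Safari) will render a stream like this directly in an img tag.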
Related
I'm creating a video chat application, but I'm having issues streaming microphone audio. I have video streams working already, but I was hoping to find out the best method of capturing a laptop's built-in microphone and streaming it. At the moment I'm using a NetworkStream for sending the video. I have limited experience with NAudio (http://naudio.codeplex.com/), but none of the microphone-capture examples seem to include a new-audio-frame event (which is how I send video frames).
I've been looking at http://voicerecorder.codeplex.com/ but it seems to be more than I need and doesn't cover streaming.
How do I capture microphone audio and stream it, if possible without the overhead of saving the audio to a file? I'd appreciate a simple example.
Create a new WaveIn object
call StartRecording
in the DataAvailable event handler, transmit args.BytesRecorded bytes from args.Buffer across the network.
Note that this will mean you are transmitting PCM, which is not very efficient. Normally for network streaming you would use a codec. In the NAudioDemo source code there is a Network Chat demo showing this in action.
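For reference, a minimal sketch of those three steps, assuming NAudio's WaveInEvent (a WaveIn variant that doesn't need a GUI message pump) and a UdpClient; the receiver address, port, and wave format are just placeholder choices:

// Capture the default microphone and push the raw PCM over UDP (no codec, as noted above).
var udp = new System.Net.Sockets.UdpClient();
udp.Connect("192.168.1.50", 5005);                             // placeholder receiver address and port

var waveIn = new NAudio.Wave.WaveInEvent();
waveIn.WaveFormat = new NAudio.Wave.WaveFormat(8000, 16, 1);   // 8 kHz, 16-bit, mono keeps packets small
waveIn.DataAvailable += (sender, args) =>
{
    // args.Buffer can be larger than the captured data, so only send the first BytesRecorded bytes.
    udp.Send(args.Buffer, args.BytesRecorded);
};
waveIn.StartRecording();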
I have a video server with IP 192.168.1.XX.
It has 3 possible formats: JPEG, MPEG-4, or H.264.
The video server is broadcasting video (with audio) in real time.
I have no problems streaming the video with the AForge library, but I also need to stream the audio.
The video server supports several protocols: HTTP, RTSP, RTP, RTCP.
According to the user's manual, RTSP is the protocol I should use to get MPEG-4 (audio and video), but I haven't found anything to stream over RTSP in C#, so I'm trying to stream audio and video separately.
The ports are:
RTSP: 554
RTP(Video): 5556
RTP(Audio):5558
RTCP(Video): 5557
RTCP(Audio): 5559
Does anybody know how RTP works, or how I can get the sound from the video server?
I would learn GStreamer. I am assuming that you are using Windows since you are doing this in C#; it has a fairly stable Windows port with a nice .NET wrapper. If you aren't using Windows, then GStreamer is most certainly your best bet.
In GStreamer you would most likely use a pipeline like:
your video src -> x264enc or ffenc_mpv4 -> rtph264pay or rtpmp4vpay -> udpsink
your audio src -> ffenc_aac or preferably a lower latency codec like mULaw -> rtppay -> udpsink
and so on. It is very easy to use. They even have a nice rtpbin for you to use if you want to actually manage an RTP session.
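If you use the .NET wrapper (gstreamer-sharp), launching a pipeline like the ones above is largely a matter of parsing a pipeline string. This is only a sketch under the assumption that the bindings are installed; videotestsrc and the host/port are placeholders for your real source and receiver:

// Parse a GStreamer pipeline description and set it playing via the gstreamer-sharp bindings.
Gst.Application.Init();
var pipeline = Gst.Parse.Launch(
    "videotestsrc ! x264enc ! rtph264pay ! udpsink host=192.168.1.50 port=5556");
pipeline.SetState(Gst.State.Playing);
// ... run your application; when shutting down:
pipeline.SetState(Gst.State.Null);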
More information can be found here:
http://gstreamer.freedesktop.org/
Here is a nice sample of how to do rtp:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-gstrtpbin.html
I have done this sort of thing with DirectShow filters, but it is much more involved. You usually have to manually handle the RTP payloading and the transport--not to mention deal with COM--whereas GStreamer provides those mechanisms for you out of the box.
You can use https://net7mma.codeplex.com/
It is a C# media server; it will hand you each RtpPacket, and from there you can pass the packets to a decoder of your choice, among other things, all without bogging down the source stream.
I have a two-way audio chat written in C#. That means: there is a server application that sends wave-encoded audio as a byte array over UDP, and the client application then decodes and plays the audio. This works fine. For recording and encoding/decoding, I use the NAudio library.
Now the task is a different one: to play the stream on a website. I would prefer to use ASP.NET, so I can receive and decode the stream with C# and NAudio. I display a cam image there as well, which works smoothly and without any problems.
Still, I don't know how to do this.
Can the UDP byte stream just be decoded and played in the browser? I don't think this is possible. Could JavaScript be used instead of C#? The HTML5 audio tag could also be useful.
I would be very happy to receive answers.
Since you mention you're using ASP.NET, it sounds like you already have a Microsoft server stack. You might look into Windows Media Services and see how you might be able to encode your audio/video into a Windows Media stream and then embed that on your ASP.NET page.
How can I get the stream from an IP camera? It's using RTP, the stream is MPEG-4, I have the multicast address and port, and I have the IP camera's IP address and port number. I can't reach it via HTTP, for example (http://ip/jpeg), and I can't reach the stream with VLC Player either, for example (rtp://ipadressofcam:port) or (rtp://multicastaddress:port).
So what is the ffmpeg command for that?
I'm on Windows, and I only write code in C# right now.
But the manufacturer created their own OCX used for viewing the cam. That plugin can work in .NET, but I don't want to use it because it doesn't have much functionality; for example, you can't get the current picture or a snapshot of the cams. That's why I have to do it myself.
Are you sure the stream is not password-protected?
Try to see why your camera rejects VLC's requests. This is how to do it: install Wireshark, start it, and set a filter for the camera address, like ip.addr == camera_ip (xxxx.xxxx.xxxx.xxxx). Then try to connect with VLC and look at the messages exchanged. If you see a DROP, UNAUTH, or something telling you a password is required, enter it. (VLC will first try to connect using an unauthenticated procedure and, if that fails, will ask for a password.)
Wireshark will give you clues if the failure reason is something different.
And keep in mind that if VLC can't access it, the chance of finding some other way to do it is almost zero, unless you're a video guru.
Hope it helps!
You can use VLC for such a thing, via its ActiveX control, which is also available for .NET. You just need to install VLC Media Player, and then you can add its control to the VS toolbox:
http://forum.videolan.org/viewtopic.php?f=32&t=54969
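A rough sketch of driving the control from code, assuming you have dropped the VLC ActiveX control (usually named axVLCPlugin21 by the designer) onto a form; the RTSP URL is a placeholder, and the exact member names can differ between plugin versions, so verify them against the control's type library:

// Queue the camera's stream in the embedded VLC control and start playback.
string mrl = "rtsp://192.168.1.64:554/stream1";   // placeholder camera URL
axVLCPlugin21.playlist.add(mrl, null, null);      // add(MRL, name, options)
axVLCPlugin21.playlist.play();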
UPDATE
If you are ready to pay for this, you can use this company's products (http://www.mainconcept.com/products/sdks/video.html) to advance with decoding and encoding; you will find a huge library there.
So you want to receive the video stream from the camera and convert individual frames into JPEG files. This sounds good and actually quite natural: why not? There is a video feed being sent over the network, and we don't need much, just to pick out individual frames.
The original stream is MPEG-4 (Part 2) and the desired target encoding is JPEG. This breaks the task into three parts: getting the MPEG-4 video feed, decoding it into uncompressed images, and encoding those into JPEG. Let us go through these from the last one backwards.
JPEG is a popular encoding, and there are various codecs with different interfaces capable of compressing into JPEG. Options include the GDI+ API, the IJG JPEG library (libjpeg), and JPEG libraries and codecs for video with DirectShow and other interfaces.
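For instance, if a decoded frame is already in a System.Drawing.Bitmap, GDI+ can write it out as a JPEG in a few lines. A minimal sketch (the quality value and method name are just examples):

// Save a decoded frame as a JPEG with an explicit quality setting via GDI+.
using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

static void SaveFrameAsJpeg(Bitmap frame, string path)
{
    ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
    using (var encoderParams = new EncoderParameters(1))
    {
        encoderParams.Param[0] = new EncoderParameter(Encoder.Quality, 85L);   // quality 0-100, example value
        frame.Save(path, jpegCodec, encoderParams);
    }
}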
An MPEG-4 decoder is a complicated component; however, it is luckily well standardized and available through different interfaces and from several sources. In particular, an MPEG-4 Part 2 Video Decoder ships as a DMO with Windows 7. A decoder is also available as a DirectShow filter in ffdshow and in the Intel Media SDK.
Back to the first step, you need to get the MPEG-4 stream from the network. First of all, you indicated that the camera is available at a multicast UDP address. This means you open a socket and join it to the multicast group in order to start receiving RTP data. The data you receive is not pure MPEG-4 yet; it is formatted according to the RTP Payload Format for MPEG-4 Audio/Visual Streams, and as you receive the RTP stream of packets you have to be prepared to handle out-of-order fragments, packet losses, etc. You are supposed to receive what you can, check the RTP headers, and do your best to reconstruct the MPEG-4 feed.
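The receiving side might start out like the sketch below; the multicast address and port are placeholders, and real code still has to use the sequence numbers to reorder or drop fragments and reassemble the MPEG-4 payload according to the payload format:

// Join the camera's multicast group and strip the fixed RTP header from each packet (RFC 3550).
using System.Net;
using System.Net.Sockets;

var client = new UdpClient(5556);                               // placeholder RTP video port
client.JoinMulticastGroup(IPAddress.Parse("239.0.0.1"));        // placeholder multicast address
var remote = new IPEndPoint(IPAddress.Any, 0);

while (true)
{
    byte[] packet = client.Receive(ref remote);
    int payloadType  = packet[1] & 0x7F;                        // typically a dynamic payload type for MPEG-4
    ushort sequence  = (ushort)((packet[2] << 8) | packet[3]);  // use this to detect loss and reordering
    uint timestamp   = (uint)((packet[4] << 24) | (packet[5] << 16) | (packet[6] << 8) | packet[7]);
    int csrcCount    = packet[0] & 0x0F;
    int headerLength = 12 + 4 * csrcCount;                      // fixed header plus CSRC list (extension ignored)
    // Bytes from packet[headerLength] onward are the RTP payload: a fragment of the MPEG-4 stream.
}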
Eventually you need to cover all these steps from your application (not necessarily directly; you would rather use third-party libraries, including those mentioned in other answers) and build a pipeline which stitches together receiving from the network and transcoding.
So far I've seen that VLC can't open it, but if we create an .sdp file and play it with ffplay, only a black screen appears.
vlc -vvv dshow:// --sout-keep --sout-all --sout=#rtp{dst=multicastaddress,port=portNo,sdp=file:///C:/test/my.sdp}
Of course this is not a pure solution, but there is a little bit of hope.
But the complete solution would be something like this:
There is no way to use just C# to achieve what I intend. There are a few C++ library components that I can use, along with Managed C++, to write interop services and use those DLLs in my C# code. Here is what I need:
1- An RTP library: JRTPLIB (http://research.edm.uhasselt.be/~jori/page/index.php?n=CS.Jrtplib) or live555, but live555 has certain limitations.
2- The RTP library is the transport for the MPEG-4 stream I pull from my encoder, but I also need ways to control the stream; those are provided by the manufacturer's WSDLs.
3- After that I need a decoder: libavcodec. I can use libavcodec to convert an I-frame to a JPEG image in .NET.
I have images being sent to my database from a remote video source at about 5 frames per second as JPEG images. I am trying to figure out how to get those images into a video format so I can stream a live video feed to Silverlight.
It seems to make sense to create an MJPEG stream, but I'm having a few problems. Firstly, I was trying to stream via an HTTP request so I didn't have to deal with sockets, but maybe this is breaking my code.
If I try to surf to my stream from QuickTime I get a video error, Media Player shows the first frame image, and Silverlight crashes :)
Here is the code that streams. Since the content type used this way can only be sent once, I know it isn't ideal and might be the root cause. All images are coming in via a LINQ2SQL object.
I already tried simply updating the image source of an image control in Silverlight, but the flicker isn't acceptable. If Silverlight doesn't support MJPEG then there's no point even continuing, but it looks like it does. I do have access to the H.264 frames coming in, but that seemed more complicated via MP4.
Response.Clear();
Response.ContentType = "multipart/x-mixed-replace; boundary=--myboundary";
ASCIIEncoding ae = new ASCIIEncoding();
HCData data = new HCData();
var videos = (from v in data.Videos
select v).Take(50); // sample the first 50 frames
foreach (Video frame in videos)
{
byte[] boundary = ae.GetBytes("\r\n--myboundary\r\nContent-Type: image/jpeg\r\nContent-Length:" + frame.VideoData.ToArray().Length + "\r\n\r\n");
var mem = new MemoryStream(boundary);
mem.WriteTo(Response.OutputStream);
mem = new MemoryStream(frame.VideoData.ToArray());
mem.WriteTo(Response.OutputStream);
Response.Flush();
Thread.Sleep(200);
}
Thanks!
EDIT: I have the stream working in Firefox, so if I surf to the page I see video! But nothing else accepts the format. Not IE, Silverlight, Media Player - nothing.
I did MJPEG a long time ago (3-4 years ago) and I'm scratching my head trying to remember the details, and I simply can't. But if it's possible, I would suggest finding some website that streams MJPEG content, firing up Wireshark/Ethereal, and seeing what you get over the wire. My guess is you are missing some required HTTP headers that Firefox is a little more forgiving about.
If you can't find a sample MJPEG stream on the internet, a lot of webcams have software that gives you an MJPEG stream. The app I worked on was a console for multiple security cameras, so I know that is a common implementation for cams of all types (if they support a web interface).
I'm far from being an expert in MJPEG streaming, but looking at the source of mjpg-streamer on SourceForge, I think you should send each frame separately, writing the boundary before and after each of them. You should, of course, not write the Content-Type in the closing boundary.
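Applied to the loop from the question, the multipart framing would look roughly like this sketch. It reuses the Response, ae, and videos objects from the posted code, and declares the boundary without the leading dashes so that the "--myboundary" lines in the body actually match it. Each frame gets its own part headers, and the boundary line that opens the next frame is what closes the current one:

// Declare the boundary without leading dashes; each part then begins with "--" + boundary.
Response.ContentType = "multipart/x-mixed-replace; boundary=myboundary";

foreach (Video frame in videos)
{
    byte[] image = frame.VideoData.ToArray();

    // Each part starts with the boundary line, then its own headers, then a blank line.
    byte[] header = ae.GetBytes(
        "--myboundary\r\nContent-Type: image/jpeg\r\nContent-Length: " + image.Length + "\r\n\r\n");
    Response.OutputStream.Write(header, 0, header.Length);

    // Frame data, followed by a CRLF; the next part's boundary then terminates this one.
    Response.OutputStream.Write(image, 0, image.Length);
    byte[] crlf = ae.GetBytes("\r\n");
    Response.OutputStream.Write(crlf, 0, crlf.Length);

    Response.Flush();
    Thread.Sleep(200);
}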
First, write your MJPEG frames out to separate files. You should then be able to open these in Photoshop (this will independently verify that you are parsing the stream correctly). If this fails, my bet is that you have HTTP headers embedded in your image data.
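For that dump-to-disk check, something along these lines should do (a sketch, reusing the videos collection from the question; the output folder is just an example):

// Write each frame's raw bytes to its own .jpg so it can be opened in an image editor.
int i = 0;
foreach (Video frame in videos)
{
    System.IO.File.WriteAllBytes(@"C:\temp\frame" + i + ".jpg", frame.VideoData.ToArray());
    i++;
}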
Have you looked at various web cam setups that exist on the net? A lot of them do some sort of low res update without flicker. You should be able to reverse engineer these types of sites for additional clues to your problem.
Some sites create a GIF animation, maybe that is an option so that the user can see the past minute or so.
About your edit: MJPEG is supported by Firefox and Safari. However, other applications like Internet Explorer or Silverlight do not support it, depending on what you are doing with it.