I am a .NET developer who has written a CMS system for the intranet of a specific company.
Our client has the ability to upload videos and other media there and let his employees and customers view them alongside other information.
We use a standard HttpHandler to fetch the uploaded video from the server's hard disk and context.Response.TransmitFile() it to the browser.
So we can use this handler as the target of an HTML5 src attribute.
Now I've gotten a request to sort of "emulate" a video stream. The idea is that the client uploads the video as a file and sets a specific start date from which the video should be viewable; every request to the video should then return only the slice of the video from that start date to now.
Sort of pretending the video were a live stream that moves forward on its own.
I tried adapting the HttpHandler to calculate the number of seconds between the start date and the current request time, multiply it by the bitrate of the video and then simply cut off that many bytes from the stream (for example using Stream.Seek), but the resulting data does not get recognized by browsers as a valid video stream. I guess this is because of the missing (cut away) header information, keyframes, etc.
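To illustrate, this is roughly the naive version of the handler (videoPath, startDate and byteRate are placeholders; the comments note where it goes wrong):

// roughly what I tried: seek past (elapsed seconds * bytes per second) and send the rest
long offset = (long)((DateTime.Now - startDate).TotalSeconds * byteRate); // byteRate = bitrate / 8
using (var fs = File.OpenRead(videoPath))
{
    fs.Seek(offset, SeekOrigin.Begin);        // lands mid-file: the MP4 header (moov box) and
    fs.CopyTo(context.Response.OutputStream); // keyframe alignment are lost, so browsers reject it
}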
Does anybody know a library that allows me to do this (cutting the video into slices without writing them to the hard disk; I don't want to have a video file lying around for every request that lands on my HttpHandler)?
The video is in MP4 format and I would like to avoid the additional overhead of having to transcode it (like VLC requires when you use it for streaming).
Any ideas on this topic? I'm kinda lost...!?
Thanks in advance.
Chris
Clarification:
I do not know beforehand how much to cut off the video; that depends on the moment the stream is requested.
The formula is easy: date of the request (DateTime.Now) - configured start time of the video. This timespan has to be "skipped" from the start of the video.
Ideally I would like some library which allows me to load the file as a FileStream, skip x seconds and write the remaining bytes/frames to the output of the HttpHandler. But I have no idea how to do this, as VLC and FFMPEG seem to only support slicing by writing files, not giving me the sliced data as a stream...
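For what it's worth, this is the kind of handler I'm imagining, sketched on the assumption that ffmpeg's stdout output (pipe:1) can deliver the slice without a temp file; video.mp4 and GetConfiguredStart() are placeholders, the usual System.Diagnostics/System.Globalization imports are assumed, and the fragmented-MP4 muxer flags would need testing:

double skip = (DateTime.Now - GetConfiguredStart()).TotalSeconds; // seconds to skip (placeholder helper)
var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    // -ss before -i seeks to the nearest keyframe; frag_keyframe+empty_moov makes the
    // MP4 muxer usable on a non-seekable output such as stdout
    Arguments = string.Format(CultureInfo.InvariantCulture,
        "-ss {0:F2} -i video.mp4 -c copy -movflags frag_keyframe+empty_moov -f mp4 pipe:1", skip),
    UseShellExecute = false,
    RedirectStandardOutput = true
};
context.Response.ContentType = "video/mp4";
using (var ffmpeg = Process.Start(psi))
    ffmpeg.StandardOutput.BaseStream.CopyTo(context.Response.OutputStream);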
I'm making a project using C# 2013 (Windows Forms), and this project will use an IP camera to display video for a long time using CGI commands.
I know from the articles I've read that an IP camera returns its streaming video as a continuous multi-part stream, and I found some samples that display the video, like this one: Writing an IP Camera Viewer in C# 5.0.
But I see a lot of code just to extract the single part that represents a single image, display it, and so on.
I also tried to take continuous snapshots from the camera using the following code.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://192.168.1.200/snap1080");
using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
using (Stream strm = res.GetResponseStream())
{
    var buffered = new MemoryStream();
    strm.CopyTo(buffered); // Image.FromStream requires its stream to stay open, so buffer the JPEG
    image.Image = Image.FromStream(buffered); // dispose the previous Image first to avoid GDI+ leaks
}
I repeated this code in a loop that ran for a second and counted the number of snapshots taken; it gave me between 88 and 114 snapshots per second.
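For reference, the measurement loop was roughly this (a sketch; the camera URL is the same placeholder as above):

int count = 0;
var sw = Stopwatch.StartNew();
while (sw.ElapsedMilliseconds < 1000)
{
    var req = (HttpWebRequest)WebRequest.Create("http://192.168.1.200/snap1080");
    using (var res = req.GetResponse())
    using (var strm = res.GetResponseStream())
    using (var ms = new MemoryStream())
    {
        strm.CopyTo(ms); // download one complete JPEG
        count++;         // ends up between 88 and 114 after one second
    }
}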
IMHO the first example, which displays the video, does a lot of processing to extract the single part of the multi-part response that represents an image and display it, which may be as slow as the other method of taking continuous snapshots.
So I'm asking for other developers' experience with this issue: do you see any other differences between the two methods of displaying the video? Also, I want to know the effect of receiving a continuous multi-part stream on memory: is it safe, or will it generate out-of-memory errors?
Thanks in advance
If you are taking more than one JPEG every 1-3 seconds, you are better off capturing the H264 video stream; it will take less bandwidth and CPU.
An MJPEG stream is usually 10-20 times bigger than the same stream in H264, so 80 snapshots per second is a really large amount.
As long as you dispose of the image and stream correctly, you should not have memory issues. I have done a similar thing in the past with an IP camera, even converting all the snapshot images back into a video using ffmpeg (I think it was).
I have a form with a live view of the TV signal (from a DVB-T stick). I have the sample project "DTViewer" from http://directshownet.sourceforge.net/about.html.
Now I'm trying to capture the stream to a movie file by clicking a button, but how?
I use C# and DirectShow.NET.
I tried searching in many sample projects, but these are made for video inputs, not for a DVB-T stick with a BDA (Broadcast Driver Architecture) interface.
Help!
I don't really know what exactly you mean by a “movie file”, but I can tell you how to capture the entire MUX (transport stream):

1. Create a graph with the Microsoft DVBT Network Provider, You_Name_It BDA DVBT Tuner, You_Name_It BDA Digital Capture and MPEG-2 Demultiplexer filters.
2. Once you connect them, enumerate all output pins on the MPEG-2 Demultiplexer and render them.
3. Tune the frequency of your choice (put_TuneRequest). At this point everything is ready to run the graph, but don't run it!
4. Enumerate all filters in the graph and disconnect all of them except the Microsoft DVBT Network Provider, You_Name_It BDA DVBT Tuner and You_Name_It BDA Digital Capture. Remove the disconnected filters from the graph, except the MPEG-2 Demultiplexer (it has to stay in the graph even though it is not connected).
5. Add a Sample Grabber filter and a Null Renderer filter. Connect the Digital Capture filter to the Sample Grabber, and the Sample Grabber to the Null Renderer.
6. Run the graph. Through the callback on the Sample Grabber filter you will receive the entire MUX.

Of course, there is still some work to demux the data, but once you do that, you can capture all TV programs in one MUX at once. The easiest way is to capture it in TS format, because TS is what is broadcast (188-byte packets).
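To make the callback part concrete, here is a minimal sketch of a DirectShowLib ISampleGrabberCB implementation that dumps the raw MUX to disk; register it on the Sample Grabber with SetCallback(new TsDumper(), 1) so the second method (BufferCB) is invoked. The output path is a placeholder, and the usual DirectShowLib and System.Runtime.InteropServices imports are assumed:

class TsDumper : ISampleGrabberCB
{
    private readonly FileStream _output = File.Create(@"C:\capture.ts"); // placeholder path

    public int SampleCB(double sampleTime, IMediaSample pSample)
    {
        return 0; // not used; we registered for BufferCB
    }

    public int BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen)
    {
        var data = new byte[bufferLen];
        Marshal.Copy(pBuffer, data, 0, bufferLen);
        _output.Write(data, 0, data.Length); // raw TS: a sequence of 188-byte packets
        return 0;
    }
}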
It seems to me VLC has BDA support (see the BDA.c file in their source); maybe you can pick up something from their code?
There is no simple answer to your question. I started one such project and found out how little I knew about it, so here is a little something from my research.
First, you'll have to understand that a DVB-T tuner card or stick doesn't give you video frames in the classical sense; the decoding is done on the PC, on the CPU. The external card only provides you with compressed data, as it fetches it off the air.
Next: the data delivered to you will be in MPEG-2 or MPEG-4 Transport Stream format, which is suitable for streaming or broadcasting, not for saving to a file. VLC is able to play a TS written to a file, but to record a proper video file you'll have to either transcode it or repack it into a Program Stream. Google it a little and you'll find the differences.
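(For the repacking step, a plain stream copy is often enough; a one-line sketch, assuming ffmpeg is on the PATH and capture.ts is the recorded file:)

// repack the captured TS into an MP4 container without re-encoding (stream copy)
Process.Start("ffmpeg", "-i capture.ts -c copy capture.mp4").WaitForExit();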
More: one frequency on the air carries many channels, and that channel packing is called a 'mux'. So from the BDA tuner/capture filter you'll get ALL the data, and you'll have to demux it manually or let the BDA demuxer do it for you.
Hope that's enough info to get you started; I can post some interesting links when I get to a real keyboard.
Is there a way I can capture a single frame from a video file (mpg, wmv, flv, etc.) at a specific point in the video (e.g. 5 seconds in, or the 25th frame), and then save it as an image?
[Edit]
Something like YouTube does. That can't all be done manually? ;)
I'd use DirectShow.NET, because it'll let you do a lot of the work in managed code, which is quite a bit friendlier than doing it in native code.
You'll have to construct a filter graph to render the file you want: you'll need a file reader/demuxer for the container format (i.e. if it's an MP4 file, you'll need an MP4 demux) and a decoder for the video format (i.e. if it's H264, you'll need an H264 decoder filter). I'd use Windows 7 if possible; it has much better built-in media support.
Your graph should look something like:
File Reader -> Video Decoder -> Sample Grabber -> Null Renderer
You'll construct your graph and then call IMediaSeeking to seek to the approximate time of the sample you want. Then run the graph. The decompressed frames will come in through the Sample Grabber callback interface; you can check the timestamps and take the one closest to what you need.
From there, you can use .NET to save it as whatever image format you like (JPEG is probably best).
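If building the graph by hand is too much, one shortcut worth trying is DirectShowLib's MediaDet wrapper (around DirectShow's IMediaDet), which builds a similar graph internally. A sketch, assuming a demux/decoder for the format is installed and with placeholder paths:

// grab the frame at the 5-second mark and write it out as a bitmap
var det = (IMediaDet)new MediaDet();
det.put_Filename(@"C:\video.mp4");
det.put_CurrentStream(0); // assumed to be the video stream; check get_StreamType in real code
det.WriteBitmapBits(5.0, 640, 360, @"C:\frame.bmp"); // time in seconds, width, height, output file
Marshal.ReleaseComObject(det);

From there, Image.FromFile plus Save can convert the bitmap to JPEG.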
FFmpeg and .NET are your best choice.
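For example, one could shell out to ffmpeg for the actual grab (a sketch; the paths and timestamp are placeholders):

// ask ffmpeg to decode one frame at the 5-second mark and save it as a JPEG
var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = "-ss 5 -i input.wmv -frames:v 1 -y frame.jpg",
    UseShellExecute = false
};
using (var p = Process.Start(psi))
    p.WaitForExit();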
I have tons of videos in a database. They can't be accessed directly, but I can play them one by one and record them. Now I want to write a program (probably in C#) that takes a URL and starts Internet Explorer or any other default browser to open the link. Once the link is opened, the video will be playing.
Now my job is to record the video for "x" seconds, along with its audio. I can record the video by taking screenshots very frequently, but what about the audio and its quality? Do I need to put a microphone in a soundproof room next to a speaker to record it, or can I pull the audio directly off the sound card before it reaches the speakers?
Any ideas?
Umair
This is not wise at all. You will get a huge quality loss recording video from the screen and re-encoding it, not to mention the time this will take.
You should find a way to access those videos directly from the database and run them through a converter like ffmpeg.
What about using rtmpdump to retrieve the video stream as a .flv? Of course, you will need to parse the stream information from the respective web pages, but that should be manageable.
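A sketch of the idea (the RTMP URL is a placeholder you would have to scrape from the page; -r and -o are rtmpdump's stream-URL and output-file options):

// dump the stream to disk as FLV instead of re-recording it off the screen
var p = Process.Start("rtmpdump", "-r rtmp://example.com/app/stream -o capture.flv");
p.WaitForExit();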
I have images being sent to my database from a remote video source at about 5 frames per second as JPEG images. I am trying to figure out how to get those images into a video format so I can stream a live video feed to Silverlight.
It seems to make sense to create an MJPEG stream, but I'm having a few problems. Firstly, I was trying to stream via an HTTP request so I didn't have to deal with sockets, but maybe this is breaking my code.
If I try to surf to my stream from QuickTime I get a video error, Media Player shows the first frame image, and Silverlight crashes :)
Here is the code that streams. Since the content type set this way can only be sent once, I know it isn't ideal and might be the root cause. All images are coming in via a LINQ2SQL object.
I already tried simply updating the image source of an Image control in Silverlight, but the flicker isn't acceptable. If Silverlight doesn't support MJPEG then there's no point even continuing, but it looks like it does. I do have access to the H.264 frames coming in, but that seemed more complicated via MP4.
Response.Clear();
// note: the boundary parameter must not include the leading dashes; the "--" prefix
// belongs only on the boundary lines written into the body below
Response.ContentType = "multipart/x-mixed-replace; boundary=myboundary";
ASCIIEncoding ae = new ASCIIEncoding();
HCData data = new HCData();
var videos = (from v in data.Videos
              select v).Take(50); // sample the first 50 frames
foreach (Video frame in videos)
{
    byte[] jpeg = frame.VideoData.ToArray();
    byte[] boundary = ae.GetBytes("\r\n--myboundary\r\nContent-Type: image/jpeg\r\nContent-Length: " + jpeg.Length + "\r\n\r\n");
    Response.OutputStream.Write(boundary, 0, boundary.Length);
    Response.OutputStream.Write(jpeg, 0, jpeg.Length);
    Response.Flush();
    Thread.Sleep(200); // ~5 fps, matching the incoming frame rate
}
Thanks!
EDIT: I have the stream working in Firefox, so if I surf to the page I see video! But nothing else accepts the format. Not IE, Silverlight, Media Player - nothing.
I did MJPEG a long time ago (3-4 years ago) and I'm scratching my head trying to remember the details, and I simply can't. But, if it's possible, I would suggest finding some web site that streams MJPEG content, firing up Wireshark/Ethereal, and seeing what you get over the wire. My guess is you are missing some required HTTP headers that Firefox is a little more forgiving about.
If you can't find a sample MJPEG stream on the internet, a lot of web cams have software that gives you an MJPEG stream. The app I worked on was a console for multiple security cameras, so I know that is a common implementation for cams of all types (if they support a web interface).
I'm far from being an expert in MJPEG streaming, but looking at the source of mjpg-streamer on SourceForge, I think you should send each frame separately, writing the boundary before and after each of them. You should, of course, not write the Content-Type in the closing boundary.
First, write your MJPEG frames out to separate files. You should then be able to open them in Photoshop (this will independently verify that you are parsing the stream correctly). If this fails, my bet is that you have HTTP headers embedded in your image data.
Have you looked at the various web cam setups that exist on the net? A lot of them do some sort of low-res update without flicker. You should be able to reverse engineer these types of sites for additional clues to your problem.
Some sites create a GIF animation; maybe that is an option, so the user can see the past minute or so.
About your edit: MJPEG is supported by Firefox and Safari. However, other applications like Internet Explorer or Silverlight do not support it, depending on what you are doing with it.