I'm making a project using C# (Visual Studio 2013) and Windows Forms. The project will use an IP camera to display video for a long time, using CGI commands.
I know from the articles I've read that an IP camera returns its streaming video as a continuous multi-part stream, and I found some samples that display the video, like Writing an IP Camera Viewer in C# 5.0.
But I see that they use a lot of code to extract the single part that represents one image, display it, and so on.
Also, I tried to take continuous snapshots from the camera using the following code:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://192.168.1.200/snap1080");
HttpWebResponse res = (HttpWebResponse)req.GetResponse();
Stream strm = res.GetResponseStream();
image.Image = Image.FromStream(strm);
I repeated this code in a loop that runs for one second and counts the number of snapshots taken; it gives me between 88 and 114 snapshots per second.
IMHO the first example, which displays the video, does a lot of processing to extract each part of the multi-part response and display it, which may be as slow as the other method of taking continuous snapshots.
So I'm asking for other developers' experience with this issue: do you see any other difference between the two methods of displaying the video? I'd also like to know the effect of receiving a continuous multi-part stream on memory: is it safe, or will it generate out-of-memory errors?
Thanks in advance.
If you are taking more than one JPEG per 1-3 seconds, you are better off capturing the H.264 video stream; it will take less bandwidth and CPU.
An MJPEG stream is usually 10-20 times bigger than the same video as H.264, so 80 snapshots per second is a really big amount of data.
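To put rough numbers on it (assuming ~100 KB per 1080p JPEG, which is typical): 80 snapshots per second is about 8 MB/s, or roughly 64 Mbit/s, while a 1080p H.264 stream usually fits in a few Mbit/s.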
As long as you dispose of the image and stream correctly, you should not have memory issues. I have done a similar thing in the past with an IP camera, even converting all the snapshot images back into a video using ffmpeg (I think it was).
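For reference, here is a minimal sketch of that dispose-as-you-go pattern (assuming a PictureBox named image and the snapshot URL from the question). Note that Image.FromStream requires its stream to stay open for the lifetime of the image, so the snapshot is copied into a MemoryStream that the new Image keeps, and the previous frame is disposed before being replaced:

HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://192.168.1.200/snap1080");
using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
using (Stream strm = res.GetResponseStream())
{
    // Image.FromStream needs its stream alive as long as the image is,
    // so buffer the snapshot in memory first.
    MemoryStream buffer = new MemoryStream();
    strm.CopyTo(buffer);
    buffer.Position = 0;

    Image old = image.Image;
    image.Image = Image.FromStream(buffer);
    if (old != null)
        old.Dispose(); // free the previous frame's GDI+ handle and its stream
}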
I am a .NET developer who has written a CMS system for the intranet of a specific company.
Our client can upload videos and other media there and let his employees and customers view them alongside other information.
We use a standard HttpHandler to fetch the uploaded video from the server's hard disk and context.Response.TransmitFile() it to the browser, so we can use this handler as the target of an HTML5 src attribute.
Now I've gotten a request to sort of "emulate" a video stream. The idea is that the client uploads the video as a file and sets a specific start date from which the video should be viewable; every request to the video should then return only the slice of the video from the start date to now.
In other words, we pretend this video is a live stream that moves forward on its own.
I tried adapting the HttpHandler to calculate the number of seconds between the start date and the time of the current request, multiply it by the bitrate of the video, and simply cut that many bytes off the start of the stream (for example using Stream.Seek), but the resulting data does not get recognized by browsers as a valid video stream. I guess this is because of missing (cut away) header information, key frames, etc.
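For what it's worth, the naive byte-offset version of that idea looks roughly like this (a sketch only; GetConfiguredStartTime and the bitrate are hypothetical stand-ins). It fails exactly as described, because the MP4 header (the moov atom) and the keyframe alignment are cut away along with the skipped bytes:

public void ProcessRequest(HttpContext context)
{
    DateTime start = GetConfiguredStartTime();   // hypothetical helper
    double secondsToSkip = (DateTime.Now - start).TotalSeconds;
    long bytesPerSecond = 500000;                // assumed average bitrate
    long offset = (long)(secondsToSkip * bytesPerSecond);

    context.Response.ContentType = "video/mp4";
    using (FileStream fs = File.OpenRead(context.Server.MapPath("~/videos/demo.mp4")))
    {
        fs.Seek(offset, SeekOrigin.Begin);       // this is what breaks playback
        fs.CopyTo(context.Response.OutputStream);
    }
}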
Does anybody know a library that allows me to do this (cutting the video into slices without writing them to the hard disk; I don't want a video file lying around for every request that lands on my HttpHandler)?
The video is in MP4 format, and I would like to avoid the additional overhead of having to transcode it (like VLC requires when you use it for streaming).
Any ideas on this topic? I'm kind of lost...!?
Thanks in advance.
Chris
Clarification:
I do not know beforehand how much to cut off the video; that depends on the moment the stream is requested.
The formula is easy: date of the request (DateTime.Now) minus the configured start time of the video. This timespan has to be "skipped" from the start of the video.
Ideally I would like some library that lets me load the file as a FileStream, skip x seconds, and write the remaining bytes/frames to the output of the HttpHandler. But I have no idea how to do this, as VLC and ffmpeg seem to only support slicing by writing files, not giving me the sliced data as a stream...
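A sketch of one possibility: ffmpeg can write to stdout instead of a file, so the handler could let ffmpeg do the seeking and pipe its output straight into the response (untested; assumes ffmpeg.exe is on the server, and videoPath/secondsToSkip come from the handler; -movflags frag_keyframe+empty_moov produces a fragmented MP4 that stays valid on a non-seekable pipe):

ProcessStartInfo psi = new ProcessStartInfo
{
    FileName = "ffmpeg.exe",
    // -ss before -i seeks on a keyframe; -c copy avoids transcoding
    Arguments = string.Format(CultureInfo.InvariantCulture,
        "-ss {0} -i \"{1}\" -c copy -movflags frag_keyframe+empty_moov -f mp4 pipe:1",
        secondsToSkip, videoPath),
    RedirectStandardOutput = true,
    UseShellExecute = false
};
context.Response.ContentType = "video/mp4";
using (Process ffmpeg = Process.Start(psi))
{
    ffmpeg.StandardOutput.BaseStream.CopyTo(context.Response.OutputStream);
    ffmpeg.WaitForExit();
}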
I have started a thread on this at the DirectShow.NET forum; here is the link: http://sourceforge.net/projects/directshownet/forums/forum/460697/topic/5194414/index/page/1
But unfortunately the problem still persists...
I have an application that captures video from a webcam and audio from the microphone and saves them to a file. For some reason the audio and video are never in sync. I tried the following:
1. Started with the ffdshow encoder and changed to AVI Mux - the problem persists; the audio is delayed, and at the end of the video the picture remains frozen while the audio continues.
2. Changed from AVI Mux to WM ASF Writer - the video is frozen at the beginning (2 seconds) and the rest of the video is in sync (but the first two seconds are not usable).
3. Created a SampleGrabber that prints the timestamps for both audio and video - saw that the audio timestamp is 500 ms earlier, but I have no idea what to do with this fact...
4. Tried manually setting the reference clock to one of the capture filters (audio/video), but neither will cast to IReferenceClock.
5. Created a SystemClock and set it as the reference clock - made no difference.
6. Set SyncUsingStreamOffset(true) on the graph - the timestamps are much closer now, but the final result is the same.
7. Tried saving the audio and video to two different files and used VirtualDub to check whether they match; they still don't...
Oh, I forgot to mention that I also tried building the graph in GraphEditPlus, but the problem still remains. Here's a link to the graph: http://www2.picturepush.com/photo/a/8030745/img/8030745.png
Currently I am testing all my changes on the CapWMV sample from DirectShow.NET's samples.
Any advice would be highly appreciated; I am hopeless :/
Thanks,
Eran.
Update:
It seems there's a constant 500 ms gap between the audio and the video. If I use VirtualDub to delay the audio by 500 ms, it looks fine. How can I set this in the graph?
You are seeing latency on the audio stream equal to the size of the capture buffer. That is, you obtain a full buffer that started being captured 0.5 seconds earlier. You need to use smaller buffers and/or apply an offset to the buffers to adjust for the latency.
See:
Minimizing audio capture latency in DirectShow
How to eliminate 1 second delay in DirectShow filter chain? (Using Delphi and DSPACK)
IAMBufferNegotiation is the keyword.
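To make that concrete, here is a sketch using DirectShowLib (assuming audioCaptureFilter is your audio capture filter; the suggestion must be made on its output pin before the pin is connected):

// Ask the capture filter for smaller buffers, e.g. 50 ms at 44.1 kHz 16-bit stereo.
IPin audioOut = DsFindPin.ByDirection(audioCaptureFilter, PinDirection.Output, 0);
IAMBufferNegotiation negotiation = (IAMBufferNegotiation)audioOut;

AllocatorProperties props = new AllocatorProperties();
props.cBuffers = 4;
props.cbBuffer = (int)(44100 * 4 * 0.05); // bytes per 50 ms of audio
props.cbAlign = -1;                       // -1 = no preference
props.cbPrefix = -1;

int hr = negotiation.SuggestAllocatorProperties(props);
DsError.ThrowExceptionForHR(hr);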
Just wanted to add the solution for my situation; maybe it will help someone.
I was trying to record video from a webcam together with audio from a microphone. The video is HD (1080p), so I wanted to save an AVI file encoded in MPEG-4, and I used ffdshow tryouts (a free MPEG-4 encoder) together with the AVI Mux filter. The problem was that some (well, most of them :) ) of my videos had sync issues.
What I discovered was that AVI Mux does not handle synchronization; it assumes the data arrives at the appropriate time (documented here: http://msdn.microsoft.com/en-us/library/dd407208(v=vs.85).aspx). So I tried using WM ASF Writer, which does handle synchronization, and it worked fine (the 2-second freeze I mentioned above was just a glitch in VLC player), but it doesn't work well with high resolutions, and I had trouble using it with custom profiles (the filters won't connect).
I also tried Roman's suggestion, and although the links were very interesting and promising (I really recommend reading them; I can't give +1 to a post yet...), it just didn't make any difference :/
My final solution was to give up on MPEG-4 and just use MPEG-2. I switched from AVI Mux to the Microsoft MPEG-2 Encoder, which works great. I should have thought of that a long time ago :)
Hopefully this will help someone else.
Thanks,
Eran.
I had the same problem rendering video from WMV to AVI using the Xvid MPEG-4 decoder.
My final solution, without giving up MPEG-4, was to configure the AVI Mux by setting the master stream via IConfigAviMux::SetMasterStream.
As explained in the Capturing Video to an AVI File article on MSDN:
If you are capturing audio and video from two separate devices, it is a good idea to make the audio stream the master stream. This helps to prevent drift between the two streams, because the AVI Mux filter adjusts the playback rate on the video stream to match the audio stream.
Example code:
IConfigAviMux _filterAVIMuxerCfg = (IConfigAviMux)_filterAVIMuxer;
_filterAVIMuxerCfg.SetMasterStream(0); // stream indices follow pin-connection order; I connected the audio pin first, so 0 is audio ;)
I am in the process of creating a TCP remote-desktop broadcasting application (something like TeamViewer or VNC).
The server application will:
1. run on a PC, listening for multiple clients on one thread
2. capture the desktop every second on another thread
3. broadcast the desktop to each connected client.
I need to make this application work on connections with 12 KBps upload and 50 KBps download DSL (client's and server's).
So I have to reduce the size of the data/image I send per second.
I tried to reduce it by doing the following.
I. First I send a bitmap frame of the desktop, and each subsequent time I send only the difference from the previously sent frame.
II. The second way I tried was sending a JPEG frame each time. I was unable to send a JPEG frame and then, each subsequent time, send only the difference from the previously sent JPEG frame.
I tried using LZMA compression (the 7-Zip SDK) when transmitting the difference of the bitmap.
But I was unable to get the data down to 12 KBps; the lowest I was able to achieve was around 50 KBps.
Can someone advise me on an algorithm/procedure for doing this?
What you want to do is what image-compression formats do, but in a custom way (send only the changes, not the whole image over and over). Here is what I would do, in two phases (phase 1: get it done and prove it works; phase 2: optimize):
Proof of concept phase
1) Capture an image of the screen in bitmap format.
2) Section the image into blocks of contiguous bytes. You need to play around to find out what the optimal block size is; it will vary by uplink/downlink speed.
3) Get a short hash (CRC32, maybe MD5; experiment with this as well) for each block.
4) Compress (don't forget to do this!) and transfer each changed block: if the hash changed, the block changed and needs to be transferred. Stitch the image together at the receiving end to display it. (A rough sketch follows this list.)
5) Use UDP packets for data transfer.
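A rough proof-of-concept sketch of steps 1-4, using horizontal bands of rows as the "blocks" (band height, MD5, and GZip are placeholder choices to experiment with):

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.IO.Compression;
using System.Security.Cryptography;
using System.Windows.Forms;

static class FrameDiffer
{
    const int BandHeight = 32; // tuning knob: smaller bands = finer diffs, more hashes
    static readonly Dictionary<int, byte[]> lastHashes = new Dictionary<int, byte[]>();

    // Returns (band index, compressed pixels) for every band that changed
    // since the last call; the receiver stitches bands back into the frame.
    public static List<KeyValuePair<int, byte[]>> GetChangedBands()
    {
        Rectangle screen = Screen.PrimaryScreen.Bounds;
        var changed = new List<KeyValuePair<int, byte[]>>();
        using (var bmp = new Bitmap(screen.Width, screen.Height))
        using (var g = Graphics.FromImage(bmp))
        using (var md5 = MD5.Create())
        {
            g.CopyFromScreen(Point.Empty, Point.Empty, screen.Size);
            for (int y = 0, band = 0; y < screen.Height; y += BandHeight, band++)
            {
                int h = Math.Min(BandHeight, screen.Height - y);
                using (var slice = bmp.Clone(new Rectangle(0, y, screen.Width, h), bmp.PixelFormat))
                using (var raw = new MemoryStream())
                {
                    slice.Save(raw, ImageFormat.Bmp);
                    byte[] bytes = raw.ToArray();
                    byte[] hash = md5.ComputeHash(bytes);
                    byte[] previous;
                    if (lastHashes.TryGetValue(band, out previous) && SameHash(previous, hash))
                        continue; // band unchanged, skip it
                    lastHashes[band] = hash;
                    changed.Add(new KeyValuePair<int, byte[]>(band, Gzip(bytes)));
                }
            }
        }
        return changed;
    }

    static byte[] Gzip(byte[] data)
    {
        using (var outStream = new MemoryStream())
        {
            using (var gz = new GZipStream(outStream, CompressionMode.Compress))
                gz.Write(data, 0, data.Length);
            return outStream.ToArray();
        }
    }

    static bool SameHash(byte[] a, byte[] b)
    {
        if (a.Length != b.Length) return false;
        for (int i = 0; i < a.Length; i++) if (a[i] != b[i]) return false;
        return true;
    }
}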
Optimization phase
These are things you can do to optimize for speed:
1) Gather stats and hard-code transfer speed vs. frame size and hash method for optimal transfer speed.
2) Make a self-adjusting mechanism for #1.
3) Images compress better in square areas than in contiguous blocks of bytes like those in #2 of the first phase above. Change your algorithm so you are hashing visual square areas rather than sequential blocks of lines. This square method is how the image- and video-compression people do it.
4) Play around with the compression algorithm. This will give you lots of variables to play with (CPU load vs. internet access speed vs. compression algorithm choice vs. frequency of screen updates).
This is basically a summary of how (roughly) compressed video streaming works (you can see the similarities with your task if you think about it), so it's not an unproven concept.
HTH
EDIT: One more thing you can experiment with: after you capture a bitmap of the screen, reduce the number of colors in it. You can halve the image size if you go from 32-bit color depth to 16-bit, for example.
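For example, a quick sketch of that color-depth reduction (Format16bppRgb565 is one option):

// Re-draw the captured frame into a 16-bit bitmap, halving its raw size.
static Bitmap ReduceColorDepth(Bitmap src)
{
    Bitmap dst = new Bitmap(src.Width, src.Height, PixelFormat.Format16bppRgb565);
    using (Graphics g = Graphics.FromImage(dst))
        g.DrawImage(src, 0, 0, src.Width, src.Height);
    return dst;
}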
I have created two DirectShow graphs. One captures from a Hauppauge HD-PVR and stores it in a StreamBufferSink. The second one uses a StreamBufferSource, sends the output to an MPEG-2 Demultiplexer, sending the video to the ArcSoft Video Decoder and on to a Video Mixing Renderer 9 set up in windowless mode.
This all works fine for previewing the data. When I use the IStreamBufferMediaSeeking.SetPositions method (getting the interface from the StreamBufferSource) to change the playback position, if I set it anywhere but at the beginning of the stream, the video freezes and stops updating. Calling GetCurrentPosition on IStreamBufferMediaSeeking shows the position is moving on the stream, but the video just doesn't follow along.
I am programming this in C# with DirectShowLib-2005.
Any ideas on what is wrong or how to figure out what is going wrong?
What I have discovered is that StreamBufferSink/StreamBufferSource only understand MPEG-2 and DV video. H.264 is not supported, so the engine doesn't know how to seek within the stream, and thus I cannot use this component for what I want to do unless I transcode my stream to MPEG-2, which defeats the purpose of having an H.264 stream in the first place.
Further information: this actually does work under Windows 7 with the updates to the Stream Buffer Engine. To get rewind, I had to demux the stream and add the MPEG-2 Video Stream Analyzer filter before putting the data into the Stream Buffer Sink.
I have images being sent to my database from a remote video source at about 5 frames per second as JPEG images. I am trying to figure out how to get those images into a video format so I can stream a live video feed to Silverlight.
It seems to make sense to create an MJPEG stream, but I'm having a few problems. Firstly, I was trying to stream via an HTTP request so I didn't have to deal with sockets, but maybe this is breaking my code.
If I try to surf to my stream from QuickTime I get a video error, Media Player shows the first frame image, and Silverlight crashes :)
Here is the code that streams. Since the content type used this way can only be sent once, I know it isn't ideal and might be the root cause. All images are coming in via a LINQ-to-SQL object.
I already tried simply updating the image source of an Image control in Silverlight, but the flicker isn't acceptable. If Silverlight doesn't support MJPEG there's no point even continuing, but it looks like it does. I do have access to the H.264 frames coming in, but that seemed more complicated via MP4.
Response.Clear();
Response.ContentType = "multipart/x-mixed-replace; boundary=--myboundary";
ASCIIEncoding ae = new ASCIIEncoding();
HCData data = new HCData();
var videos = (from v in data.Videos
              select v).Take(50); // sample the first 50 frames
foreach (Video frame in videos)
{
    byte[] jpeg = frame.VideoData.ToArray(); // materialize once instead of twice
    byte[] boundary = ae.GetBytes("\r\n--myboundary\r\nContent-Type: image/jpeg\r\nContent-Length: " + jpeg.Length + "\r\n\r\n");
    Response.OutputStream.Write(boundary, 0, boundary.Length);
    Response.OutputStream.Write(jpeg, 0, jpeg.Length);
    Response.Flush();
    Thread.Sleep(200); // ~5 frames per second
}
Thanks!
EDIT: I have the stream working in Firefox, so if I surf to the page I see video! But nothing else accepts the format: not IE, not Silverlight, not Media Player - nothing.
I did MJPEG a long time ago (3-4 years ago) and I'm scratching my head trying to remember the details, and I simply can't. But, if it's possible, I would suggest finding a web site that streams MJPEG content, firing up Wireshark/Ethereal, and seeing what you get over the wire. My guess is you are missing some required HTTP headers that Firefox is a little more forgiving about.
If you can't find a sample MJPEG stream on the internet, a lot of web cams have software that gives you an MJPEG stream. The app I worked on was a console for multiple security cameras, so I know that is a common implementation for cams of all types (if they support a web interface).
I'm far from being an expert in MJPEG streaming, but looking at the source of mjpg-streamer on SourceForge, I think you should send each frame separately, writing the boundary before and after each of them. You should of course not write the Content-Type header in the closing boundary.
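Along those lines, here is a hedged sketch of the question's loop with more conventional MJPEG framing. Note that the boundary parameter is declared without leading dashes; each delimiter line then adds the "--" prefix, and the final closing boundary adds a trailing "--" (videos, Response, etc. are the objects from the question):

Response.ContentType = "multipart/x-mixed-replace; boundary=myboundary";
Response.BufferOutput = false;
byte[] crlf = Encoding.ASCII.GetBytes("\r\n");
foreach (Video frame in videos)
{
    byte[] jpeg = frame.VideoData.ToArray();
    byte[] header = Encoding.ASCII.GetBytes(
        "--myboundary\r\nContent-Type: image/jpeg\r\nContent-Length: " + jpeg.Length + "\r\n\r\n");
    Response.OutputStream.Write(header, 0, header.Length);
    Response.OutputStream.Write(jpeg, 0, jpeg.Length);
    Response.OutputStream.Write(crlf, 0, crlf.Length); // close this part
    Response.Flush();
    Thread.Sleep(200);
}
byte[] closing = Encoding.ASCII.GetBytes("--myboundary--\r\n"); // final boundary
Response.OutputStream.Write(closing, 0, closing.Length);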
First, write your MJPEG frames out to separate files. You should then be able to open them in Photoshop (this will independently verify that you are parsing the stream correctly). If this fails, my bet is that you have HTTP headers embedded in your image data.
Have you looked at the various web-cam setups that exist on the net? A lot of them do some sort of low-res update without flicker. You should be able to reverse-engineer these types of sites for additional clues to your problem.
Some sites create a GIF animation; maybe that is an option, so that the user can see the past minute or so.
About your edit: MJPEG is supported by Firefox and Safari. However, other applications, like Internet Explorer or Silverlight, do not support it (depending on what you are doing with it).