I am using the DirectShow sample grabber to take pictures at a rate of 25 fps from a web cam, at a resolution of 640x480. Each picture is around 25,500 bytes after converting it to JPEG. I am sending the frames using the RTP protocol, and also sending voice encoded with G.711 over RTP on a different port. I am struggling with a delay issue in the video from time to time. Maybe the JPEG size is too big? Do I need to somehow compress it to MJPEG before sending?
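(At roughly 25,500 bytes per frame and 25 fps, that is already about 25,500 × 8 × 25 ≈ 5.1 Mbit/s of video before RTP overhead.)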
When I receive a frame on the client side, I show it in a PictureBox. Changing the picture in the PictureBox at short intervals gives the illusion of video.
Is this the right way?
https://net7mma.codeplex.com/ has an implementation of this; you would use the RtspServer and just put the new images in a directory, and the class RFC2435Stream does this for you by monitoring that directory.
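A minimal sketch of what that could look like (the constructor arguments and TryAddMedia are assumptions based on the description above; check the library's samples for the exact signatures):

using Media.Rtsp;
using Media.Rtsp.Server.MediaTypes;

// Hypothetical usage -- exact signatures may differ in the library.
var server = new RtspServer(System.Net.IPAddress.Any, 554);
// RFC2435Stream watches the given directory and packetizes each new JPEG
// it finds according to RFC 2435 (JPEG over RTP).
var stream = new RFC2435Stream("webcam", @"C:\frames");
server.TryAddMedia(stream);
server.Start();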
Related
I am creating an application which records the desktop screen and sends it over a network. I was using TCP and it was working, however there were huge frame stutters even when running it on the same machine. When the screen changed, more data had to be sent, which usually caused the TCP client to take an abnormal amount of time to send it.
Client
byte[] encoded = Encoder.Encode(frame);                // takes in a bitmap image and only
                                                       // keeps the part that has changed
byte[] compressed = DataCompressor.Compress(encoded);  // GZIP-compresses the image
byte[][] slices = ByteManipulator.SliceBytes(compressed, 15000); // divides the image
                                                                 // into 15000-byte slices
foreach (byte[] slice in slices)
{
    SendTo(slice, HostIPEP); // sends to the server (the main issue)
}
The issue with the above code is that the data does not arrive in the correct order, because it is UDP. How does one get around this issue to stream video like this over UDP?
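One common way around this (a sketch, not part of the question's code; SendTo, slices and HostIPEP are from the snippet above, frameId is a new counter) is to prefix every slice with a frame number and slice index so the receiver can reorder the slices and drop frames that never complete:

// Prefix each slice with (frameId, sliceIndex, sliceCount) so the receiver
// can reassemble slices in order and detect missing ones.
static byte[] AddHeader(byte[] slice, int frameId, ushort index, ushort count)
{
    byte[] packet = new byte[8 + slice.Length];
    BitConverter.GetBytes(frameId).CopyTo(packet, 0); // bytes 0-3: frame number
    BitConverter.GetBytes(index).CopyTo(packet, 4);   // bytes 4-5: slice index
    BitConverter.GetBytes(count).CopyTo(packet, 6);   // bytes 6-7: total slices
    slice.CopyTo(packet, 8);
    return packet;
}

// Sender: tag every slice, bump the frame counter per captured frame.
for (ushort i = 0; i < slices.Length; i++)
    SendTo(AddHeader(slices[i], frameId, i, (ushort)slices.Length), HostIPEP);
frameId++;

On the receiving side, collect slices per frameId (a Dictionary keyed by frame number works); once all sliceCount pieces are present, concatenate, decompress and display. If a newer frame completes first, discard the older incomplete ones; with live video, dropping a late frame looks better than waiting for it.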
Well, the stutters come from the huge amount of data to transfer at the beginning, and whenever something changes in the screen content.
I think if you use a professional encoder for the image (screen content) it should be much better, for example an x264/x265/AV1 encoder, especially AV1 Real-Time Screen Content Coding.
For the transport, I recommend using a WebRTC server, such as SRS or MediaSoup etc.; please read more detail in this post.
If you use C# to build the app, there are also some native bindings for using WebRTC in C#.
I am a .NET developer who has written a CMS system for the intranet of a specific company.
Our client has the ability to upload videos and other media there and let his employees and customers view them alongside other information.
We use a standard HttpHandler to fetch the uploaded video from the server's HDD and context.Response.TransmitFile() it to the browser.
So we can use this handler as the target for an HTML5 src attribute.
Now I've gotten a request to sort of "emulate" a video stream. The idea is that the client uploads the video as a file, sets a specific start date from which the video should be viewable, and every request to the video should then return only the slice of the video from the start date to now.
Sort of pretending this video is a live stream which moves forward on its own.
I tried adapting the HttpHandler to calculate the number of seconds between the start date and the current request time, multiply it by the bitrate of the video, and then simply cut off x bytes from the stream (for example using Stream.Seek), but the resulting data does not get recognized by browsers as a valid video stream. I guess this is because of missing (cut away) header information, key frames etc.
Does anybody know a library that allows me to do this (cutting the video into slices without writing them to the hard disk; I don't want a video file lying around for every request that lands on my HttpHandler)?
The video is in mp4 format and I would like to avoid the additional overhead of having to transcode it (like VLC requires when you use it for streaming).
Any ideas on this topic? I'm kinda lost...!?
Thanks in advance.
Chris
Clarification:
I do not know beforehand how much to cut off the video; that depends on the moment the stream is requested.
The formula is easy: date of the request (DateTime.Now) minus the configured start time of the video. This timespan has to be "skipped" from the start of the video.
Ideally I would like some library which allows me to load the file as a FileStream, skip x seconds and write the remaining bytes/frames to the output of the HttpHandler. But I have no idea how to do this, as VLC and FFMPEG seem to only support slicing by writing files, not giving me the sliced data as a stream...
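One approach that avoids temporary files (a sketch, untested in this exact setup; GetConfiguredStartTime and the upload path are placeholders for however the CMS stores them) is to let ffmpeg do the seeking and remux straight to a pipe: with -c copy nothing is transcoded, and -movflags frag_keyframe+empty_moov produces a fragmented mp4 that is valid on a non-seekable output such as the response stream:

public void ProcessRequest(HttpContext context)
{
    string videoPath = context.Server.MapPath("~/uploads/video.mp4"); // example path
    DateTime startTime = GetConfiguredStartTime(); // placeholder for the CMS lookup
    double skipSeconds = (DateTime.Now - startTime).TotalSeconds;

    var psi = new System.Diagnostics.ProcessStartInfo
    {
        FileName = "ffmpeg",
        // -ss before -i: fast keyframe seek; -c copy: no transcoding;
        // frag_keyframe+empty_moov: fragmented mp4 that works on a pipe.
        Arguments = string.Format(System.Globalization.CultureInfo.InvariantCulture,
            "-ss {0:F2} -i \"{1}\" -c copy -movflags frag_keyframe+empty_moov -f mp4 pipe:1",
            skipSeconds, videoPath),
        RedirectStandardOutput = true,
        UseShellExecute = false
    };

    context.Response.ContentType = "video/mp4";
    using (var ffmpeg = System.Diagnostics.Process.Start(psi))
    {
        ffmpeg.StandardOutput.BaseStream.CopyTo(context.Response.OutputStream);
        ffmpeg.WaitForExit();
    }
}

The seek with -c copy snaps to the nearest keyframe, so playback can start up to one GOP early, but nothing is transcoded and nothing is written to disk.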
I'm making a project using C# 2013 and Windows Forms, and this project will use an IP camera to display video for a long time using CGI commands.
I know from the articles I've read that the streaming video returned by the IP camera is a continuous multipart stream, and I found some samples that display the video, like this one: Writing an IP Camera Viewer in C# 5.0.
But I see a lot of code to extract the single part that represents a single image, display it, and so on.
I also tried to take continuous snapshots from the camera using the following code.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://192.168.1.200/snap1080");
using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
using (Stream strm = res.GetResponseStream())
using (Image snapshot = Image.FromStream(strm))
{
    // Copy into a Bitmap so the image stays valid after the stream is closed.
    image.Image = new Bitmap(snapshot);
}
I repeated this code in a loop that runs for one second and counts the number of snapshots taken in that second, and it gives me a number between 88 and 114 snapshots per second.
IMHO the first example, which displays the video, does a lot of processing to extract the single part of the multipart response and display it, which may be as slow as the other method of taking continuous snapshots.
So I ask for other developers' experiences on this issue, in case they see other differences between the two methods of displaying the video. I also want to know the effect of receiving a continuous multipart stream on memory: is it safe, or will it generate out-of-memory errors?
Thanks in advance
If you are taking more than 1 JPEG per 1-3 seconds, it is better to capture the H.264 video stream instead; it will take less bandwidth and CPU.
Usually an MJPEG stream is 10-20 times bigger than the same H.264 stream, so 80 snapshots per second is a really large amount of data.
As long as you dispose of the image and stream correctly, you should not have memory issues. I have done a similar thing in the past with an IP camera, even converting all the images that I took as snapshots back into a video using ffmpeg (I think it was).
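Concretely, in the snapshot loop from the question that means also releasing the frame being replaced; a minimal sketch (image is the PictureBox from the question, newFrame the freshly decoded snapshot):

Image old = image.Image;
image.Image = newFrame;   // newFrame decoded from the response stream as above
if (old != null)
    old.Dispose();        // release the previous frame's GDI+ handle

If the loop runs on a background thread, the assignment to image.Image has to be marshalled to the UI thread with Invoke.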
I'm developing a C# video streaming application.
At the moment I am able to capture video frames using OpenCV and encode them using ffmpeg, and also to capture and encode audio using ffmpeg and save the data into a queue.
The problem I am facing is that when I send both streams simultaneously, I lose the video stream. That is, I stream by first sending a video packet and then an audio packet, but the player identifies the video stream first, starts to play it, and never plays the audio packets.
Can anyone suggest a method for doing this task?
It will be greatly appreciated.
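One common approach (a sketch, assuming each encoded packet carries a presentation timestamp; MediaPacket and Send are hypothetical names, not part of the question's code) is to pull from the two queues in timestamp order rather than strictly alternating, so packets leave in presentation order:

using System.Collections.Generic;

class MediaPacket
{
    public long Pts;      // presentation timestamp in a shared time base
    public byte[] Data;
    public bool IsVideo;
}

// Always send whichever packet is due first, regardless of stream.
static void SendInterleaved(Queue<MediaPacket> video, Queue<MediaPacket> audio)
{
    while (video.Count > 0 || audio.Count > 0)
    {
        Queue<MediaPacket> next;
        if (video.Count == 0) next = audio;
        else if (audio.Count == 0) next = video;
        else next = video.Peek().Pts <= audio.Peek().Pts ? video : audio;

        Send(next.Dequeue()); // hypothetical network send
    }
}

Whether the player actually plays both streams also depends on how they are declared to it (for example an SDP or container header advertising both an audio and a video track), so interleaving alone may not be enough.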
I am solving the problem of transferring images in a loop from a client (a robot with a camera) to a server (a PC).
I am trying to come up with ideas for maximizing the transfer speed so I can get the best possible FPS (I want to create a live video stream out of the transferred images). Disregarding the physical limitations of the WiFi stick on the robot, what would you suggest?
So far I have decided:
to use YUV colorspace instead of RGB
to use UDP protocol instead of TCP/IP
Is there anything else I could do to get the maximum FPS possible?
This might be quite a bit of work, but if your client can handle the computations in real time you could use the same method that video encoders use: send a key frame every, say, 5 frames and in between send only the information that changed, not the whole frame. I don't know the details of how this is done, but try Googling p-frames or video compression.
Compress the difference between successive images. Add some checksum. Provide some way for the receiver to request full image data for cases where things get out of sync.
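A crude illustration of the idea (a sketch only; real codecs use motion estimation rather than a raw XOR, and DeltaEncoder is a made-up name): XOR each frame against the previous one so unchanged pixels become zeros, which compress extremely well, and send an untouched key frame every N frames so the receiver can resynchronize:

class DeltaEncoder
{
    const int KeyFrameInterval = 5; // send a full frame every 5th frame
    byte[] previous;
    int frameIndex;

    public byte[] EncodeFrame(byte[] raw) // raw pixel buffer, one fresh array per frame
    {
        bool keyFrame = previous == null || frameIndex % KeyFrameInterval == 0;
        byte[] payload = raw;
        if (!keyFrame)
        {
            payload = new byte[raw.Length];
            for (int i = 0; i < raw.Length; i++)
                payload[i] = (byte)(raw[i] ^ previous[i]); // zeros where nothing changed
        }
        previous = raw;
        frameIndex++;

        byte[] compressed = Compress(payload); // long runs of zeros shrink dramatically
        byte[] packet = new byte[compressed.Length + 1];
        packet[0] = keyFrame ? (byte)1 : (byte)0; // flag so the receiver knows what it got
        compressed.CopyTo(packet, 1);
        return packet;
    }

    static byte[] Compress(byte[] data)
    {
        using (var ms = new System.IO.MemoryStream())
        {
            using (var gz = new System.IO.Compression.GZipStream(
                       ms, System.IO.Compression.CompressionMode.Compress))
                gz.Write(data, 0, data.Length);
            return ms.ToArray();
        }
    }
}

The receiver does the inverse: decompress, and if the flag byte is 0, XOR against the last frame it reconstructed. The checksum and the resync request mentioned above guard against a lost delta corrupting every frame after it.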
There are probably a host of protocols doing that already.
So, search for live video stream protocols.
Cheers & hth.,