I'm currently developing an application where I must send a picture every X seconds to my server, and the server will upload it to my FTP. To keep the picture light, its format is JPEG, and my last image was 135 KB, which is 135,000 bytes.
Usually I send packets of at most 8192 bytes, but I need this picture-sending mechanism in my application, so I'm here to ask you guys: what would be the best way to send those 135,000 bytes to my server? A fast way, too.
All at once?
Slice it, 8192 bytes apiece?
Some other method I'm missing?
EDIT : I use TCP
Thanks for your time.
A TCP packet can carry up to 64 KB (65,535 bytes), so:
You have three options:
Assuming you have already converted your image to a byte[], split it across a few packets (135,000 bytes needs at least three at that size) and combine them on the server side. You will need to watch out for their order (see the sketch below).
Resize your image so that it fits within a single packet.
Search for a library that does it for you (splits an image into several packets and combines it back together).
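For the first option, a minimal sketch of what the slicing could look like on the client side, assuming the JPEG is already a byte[] and you have a connected TcpClient; the 4-byte length prefix is my own addition so the server knows where one image ends and the next begins:

using System;
using System.Net.Sockets;

static void SendImage(TcpClient client, byte[] imageBytes)
{
    NetworkStream stream = client.GetStream();

    // Prefix the image with its length so the server knows how many bytes to expect.
    byte[] lengthPrefix = BitConverter.GetBytes(imageBytes.Length);
    stream.Write(lengthPrefix, 0, lengthPrefix.Length);

    // TCP is a byte stream, so the chunk size only affects buffering, not correctness.
    const int chunkSize = 8192;
    int offset = 0;
    while (offset < imageBytes.Length)
    {
        int count = Math.Min(chunkSize, imageBytes.Length - offset);
        stream.Write(imageBytes, offset, count);
        offset += count;
    }
}

On the server, read the 4-byte length first and then keep reading until that many bytes have arrived; since TCP already guarantees ordering and delivery, no extra reassembly logic is needed.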
I am creating an application which records the desktop screen and sends it over a network. I was using TCP and it was working; however, there were huge frame stutters even when doing it on the same machine. When the screen changed, more data had to be sent, which usually caused the TCP client to take an abnormal amount of time sending the data.
Client
byte[] encoded = Encoder.Encode(frame);                           // Takes in a bitmap image and keeps only the part of the image that has changed
byte[] compressed = DataCompressor.Compress(encoded);             // GZIP-compresses the image
byte[][] slices = ByteManipulator.SliceBytes(compressed, 15000);  // Divides the image into slices that are 15000 bytes in length
foreach (byte[] slice in slices)
{
    SendTo(slice, HostIPEP); // Sends to the server (the main issue)
}
The issue with the above code is that the data does not arrive in the correct order, due to it being UDP. How does one get around this issue to stream video like this over UDP?
Well, the stutters come from the huge amount of data to transfer, at the beginning and whenever something changes in the screen content.
I think if you use a professional encoder for the image (screen content), it should be much better; for example, an x264/x265/AV1 encoder, especially AV1 real-time screen content coding.
For the transport, I recommend using a WebRTC server, such as SRS or MediaSoup, etc.; please read more detail in this post.
If you use C# to build the app, there are also native bindings for using WebRTC in C#.
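If you do stay with plain UDP instead of moving to WebRTC, the usual workaround for out-of-order datagrams is to tag every slice with a frame number and a slice index, so the receiver can reassemble frames and drop stale ones. A rough sketch of the sending side, reusing the SliceBytes, SendTo, and HostIPEP names from the question (the header layout is just an example):

using System.IO;

int frameNumber = 0;

void SendFrame(byte[] compressed)
{
    byte[][] slices = ByteManipulator.SliceBytes(compressed, 15000);
    for (ushort i = 0; i < slices.Length; i++)
    {
        using (MemoryStream ms = new MemoryStream())
        using (BinaryWriter writer = new BinaryWriter(ms))
        {
            // 4-byte frame number + 2-byte slice index + 2-byte slice count, then the payload.
            writer.Write(frameNumber);
            writer.Write(i);
            writer.Write((ushort)slices.Length);
            writer.Write(slices[i]);
            writer.Flush();
            SendTo(ms.ToArray(), HostIPEP); // same send call as in the question
        }
    }
    frameNumber++;
}

On the receiving side, buffer slices per frame number and only decode a frame once all of its slices have arrived, discarding frames that are too old; that keeps late or lost datagrams from corrupting newer frames.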
I'm making a project using C# 2013 and Windows Forms; this project will use an IP camera to display video for a long time using CGI commands.
I know from the articles I've read that the IP camera returns its streaming video as a continuous multipart stream, and I found some samples that display the video, like this one: Writing an IP Camera Viewer in C# 5.0.
But I see a lot of code there to extract each part that represents a single image, display it, and so on.
I also tried to take continuous snapshots from the camera using the following code:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://192.168.1.200/snap1080"); // request a single snapshot
HttpWebResponse res = (HttpWebResponse)req.GetResponse();
Stream strm = res.GetResponseStream();
image.Image = Image.FromStream(strm); // display it in the PictureBox
I repeated this code in a loop that runs for one second and counts the number of snapshots taken; it gives me between 88 and 114 snapshots per second.
IMHO, the first example that displays the video does a lot of processing to extract each part of the multipart response and display it, which may be as slow as the other method of taking continuous snapshots.
So I'm asking for other developers' experiences with this issue: do they see any other difference between the two methods of displaying the video? I also want to know the effect of receiving a continuous multipart stream on memory: is it safe, or will it generate out-of-memory errors?
Thanks in advance
If you are taking more than one JPEG every 1-3 seconds, it is better to capture an H.264 video stream; it will take less bandwidth and CPU.
Usually an MJPEG stream is 10-20 times bigger than the equivalent H.264 stream, so 80+ snapshots per second is a really large amount of data.
As long as you dispose of the image and the stream correctly, you should not have memory issues. I have done a similar thing in the past with an IP camera, even converting all the snapshot images back into a video using ffmpeg (I think it was).
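For reference, here is one way the snapshot code from the question could be written so that the response, the stream, and the previous image all get released, assuming a WinForms PictureBox named image as above (the MemoryStream copy is there because Image.FromStream needs its source stream to stay alive for the lifetime of the image):

HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://192.168.1.200/snap1080");
using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
using (Stream strm = res.GetResponseStream())
{
    var ms = new MemoryStream();
    strm.CopyTo(ms);                     // private copy, so the response can be closed now

    Image old = image.Image;
    image.Image = Image.FromStream(ms);  // show the new snapshot
    if (old != null)
        old.Dispose();                   // release the previous bitmap's GDI+ handle
}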
I am using the DirectShow sample grabber to take pictures at a rate of 25 fps from a webcam, using a picture resolution of 640x480. Picture size is around 25,500 bytes after converting to JPEG. I am sending each frame using the RTP protocol, and I am also sending voice encoded with G.711 over RTP on a different port. I am struggling with a delay issue in the video from time to time. Maybe the JPEG size is too big? Do I somehow need to compress the frames to MJPEG before sending?
When I receive a frame on the client side, I show it in a PictureBox. Changing the picture in the PictureBox in small intervals of time gives the illusion of video.
Is this the right way?
https://net7mma.codeplex.com/ has an implementation of this: you would use the RtspServer, just put the new images in a directory, and use the class RFC2435Stream, which does this for you by monitoring that directory.
I am solving the problem of transferring images from a camera in a loop from a client (a robot with a camera) to a server (a PC).
I am trying to come up with ideas on how to maximize the transfer speed so I can get the best possible FPS (because I want to create a live video stream out of the transferred images). Disregarding the physical limitations of the Wi-Fi stick on the robot, what would you suggest?
So far I have decided:
to use YUV colorspace instead of RGB
to use UDP protocol instead of TCP/IP
Is there anything else I could do to get the maximum fps possible?
This might be quite a bit of work, but if your client can handle the computations in real time, you could use the same method that video encoders use: send a key frame every, say, 5 frames, and in between send only the information that changed, not the whole frame. I don't know the details of how this is done, but try Googling P-frames or video compression.
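A very rough sketch of that control flow in C#, just to show the idea (KeyFrameInterval, EncodeFull and EncodeDelta are placeholder names for whatever full-frame and difference encoding you choose):

const int KeyFrameInterval = 5;   // placeholder value: one key frame every 5 frames
int frameCounter = 0;
byte[] previousFrame = null;

byte[] EncodeNextFrame(byte[] currentFrame)
{
    byte[] payload;
    if (previousFrame == null || frameCounter % KeyFrameInterval == 0)
    {
        payload = EncodeFull(currentFrame);                   // key frame: the whole image
    }
    else
    {
        payload = EncodeDelta(previousFrame, currentFrame);   // only what changed since the last frame
    }
    previousFrame = currentFrame;
    frameCounter++;
    return payload;
}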
Compress the difference between successive images. Add some checksum. Provide some way for the receiver to request full image data for the case where things get out of sync.
There are probably a host of protocols doing that already.
So, search for live video stream protocols.
Cheers & hth.,
I am in the process of creating a TCP remote desktop broadcasting application (something like TeamViewer or VNC).
The server application will:
1. run on a PC, listening for multiple clients on one thread,
2. record the desktop every second on another thread,
3. and broadcast the desktop to each connected client.
I need to make this application able to run on a DSL connection with 12 KB/s upload and 50 KB/s download (on both the client's side and the server's).
So I have to reduce the size of the data/image I send per second.
I tried to reduce it by doing the following:
I. First I send a Bitmap frame of the desktop, and each subsequent time I send only the difference from the previously sent frame.
II. The second way I tried was to send a JPEG frame each time.
I was unable to send a JPEG frame and then, each subsequent time, send the difference from the previously sent JPEG frame.
I tried using LZMA compression (the 7-Zip SDK) when I was transmitting the difference of the Bitmap.
But I was unable to reduce the data to 12 KB/s; the best I was able to achieve was around 50 KB/s.
Can someone advise me on an algorithm/procedure for doing this?
What you want to do is what image compression formats do, but in a custom way (send only the changes, not the whole image over and over). Here is what I would do, in two phases (phase 1: get it done and prove it works; phase 2: optimize).
Proof of concept phase
1) Capture an image of the screen in bitmap format
2) Section the image into blocks of contiguous bytes. You need to play around to find out what the optimal block size is; it will vary by uplink/downlink speed.
3) Get a short hash (CRC32, maybe MD5; experiment with this as well) for each block
4) Compress (don't forget to do this!) and transfer each changed block (if the hash changed, the block changed and needs to be transferred); see the sketch after this list. Stitch the image together at the receiving end to display it.
5) Use UDP packets for data transfer.
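A bare-bones sketch of steps 2-4, assuming the screen capture has already been flattened into a byte[] (the 4096-byte block size and MD5 are arbitrary starting points to experiment with, not recommendations):

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

const int BlockSize = 4096;                                          // experiment with this
Dictionary<int, string> lastHashes = new Dictionary<int, string>();  // block index -> last sent hash

IEnumerable<Tuple<int, byte[]>> ChangedBlocks(byte[] screen)
{
    using (MD5 md5 = MD5.Create())
    {
        for (int index = 0, offset = 0; offset < screen.Length; index++, offset += BlockSize)
        {
            int count = Math.Min(BlockSize, screen.Length - offset);
            string hash = Convert.ToBase64String(md5.ComputeHash(screen, offset, count));

            string lastHash;
            if (!lastHashes.TryGetValue(index, out lastHash) || lastHash != hash)
            {
                lastHashes[index] = hash;
                byte[] block = new byte[count];
                Buffer.BlockCopy(screen, offset, block, 0, count);
                yield return Tuple.Create(index, block);   // this block changed: compress and send it
            }
        }
    }
}

Each block the method yields would then be compressed and sent along with its index, so the receiver knows where to stitch it back into the frame.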
Optimization phase
These are things you can do to optimize for speed:
1) Gather stats and hard-code transfer speed vs. frame size and hash method for optimal transfer speed
2) Make a self-adjusting mechanism for #1
3) Images compress better in square areas rather than in contiguous blocks of bytes like the ones I described in #2 of the first phase above. Change your algorithm so you are getting a visual square area rather than sequential blocks of lines. This square method is how the image and video compression people do it.
4) Play around with the compression algorithm. This will give you lots of variables to play with (CPU load vs internet access speed vs compression algorithm choice vs frequency of screen updates)
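For experimenting with point 4, a small helper that GZip-compresses a block and reports the size and time can make the trade-offs visible (GZipStream is just one convenient built-in option; swap in whatever algorithm you are testing):

using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;

static byte[] CompressAndMeasure(byte[] block)
{
    Stopwatch watch = Stopwatch.StartNew();
    byte[] compressed;
    using (MemoryStream output = new MemoryStream())
    {
        using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
        {
            gzip.Write(block, 0, block.Length);
        }
        compressed = output.ToArray();   // safe to read after the GZipStream has been closed
    }
    watch.Stop();
    Console.WriteLine("{0} -> {1} bytes in {2} ms",
        block.Length, compressed.Length, watch.ElapsedMilliseconds);
    return compressed;
}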
This is basically a summary of how (roughly) compressed video streaming works (you can see the similarities with your task if you think about it), so it's not an unproven concept.
HTH
EDIT: One more thing you can experiment with: after you capture a bitmap of the screen, reduce the number of colors in it. You can save half the image size if you go from 32-bit color depth to 16-bit, for example.
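A quick way to try that last idea with System.Drawing, assuming screenshot is the 32-bit Bitmap you captured (Format16bppRgb565 halves the raw pixel data; whether it helps after compression is something to measure):

using System.Drawing;
using System.Drawing.Imaging;

static Bitmap ReduceColorDepth(Bitmap screenshot)
{
    // Redraw the 32bpp capture into a 16bpp (5-6-5) bitmap.
    Bitmap reduced = new Bitmap(screenshot.Width, screenshot.Height, PixelFormat.Format16bppRgb565);
    using (Graphics g = Graphics.FromImage(reduced))
    {
        g.DrawImage(screenshot, 0, 0, screenshot.Width, screenshot.Height);
    }
    return reduced;
}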