I am working on a simple program that grabs images from a remote IP camera. After days of research, I was able to extract JPEG images from an MJPEG live stream using sample code I found.
I did a prototype using Windows Forms. With Windows Forms, I receive approximately 80 images every 10 seconds from the IP camera.
Now I ported the code to Unity3D and I get about 2 frames every 10 seconds.
So basically about 78 images are not received.
The result looks like a medieval PowerPoint slide show.
I am running the function in a new Thread, just like I did in the Windows Forms app. I first thought the problem in Unity was caused by displaying the image, but it wasn't.
I removed the code that displays the image as a texture and used an integer to count the number of images received. Still, I get about 2 to 4 images every 10 seconds in Unity, while the Windows Forms app gets about 80 to 100 images every 10 seconds.
Receiving 2 images every 10 seconds in Unity is unacceptable for what I am doing. The code I wrote doesn't seem to be the problem, because it works great in Windows Forms.
Things I've Tried:
I thought the problem was with the Unity3D Editor runtime, so I built for Windows 10 64-bit and ran that, but it didn't solve the problem.
Changed the Scripting Backend from Mono2x to IL2CPP, but the problem remains.
Changed the API Compatibility Level from .NET 2.0 to .NET 2.0 Subset, and nothing changed.
Below is the simple function that has the problem. It runs too slowly in Unity even though I call it from another thread.
bool keepRunning = true;
private void Decode_MJPEG_Images(string streamTestURL = null)
{
keepRunning = true;
streamTestURL = "http://64.122.208.241:8000/axis-cgi/mjpg/video.cgi?resolution=320x240"; //For Testing purposes only
// create HTTP request
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(streamTestURL);
// get response
WebResponse resp = req.GetResponse();
System.IO.Stream imagestream = resp.GetResponseStream();
const int BufferSize = 5000000;
byte[] imagebuffer = new byte[BufferSize];
int a = 2;
int framecounter = 0;
int startreading = 0;
byte[] start_checker = new byte[2];
byte[] end_checker = new byte[2];
while (keepRunning)
{
start_checker[1] = (byte)imagestream.ReadByte();
end_checker[1] = start_checker[1];
//This if statement searches for the JPEG header, and performs the relevant operations
if (start_checker[0] == 0xff && start_checker[1] == 0xd8)// && Reset ==0)
{
Array.Clear(imagebuffer, 0, imagebuffer.Length);
//Rebuild jpeg header into imagebuffer
imagebuffer[0] = 0xff;
imagebuffer[1] = 0xd8;
a = 2;
framecounter++;
startreading = 1;
}
//This if statement searches for the JPEG footer, and performs the relevant operations
if (end_checker[0] == 0xff && end_checker[1] == 0xd9)
{
startreading = 0;
//Write the final byte of the JPEG footer into imagebuffer
imagebuffer[a] = start_checker[1];
System.IO.MemoryStream jpegstream = new System.IO.MemoryStream(imagebuffer);
Debug.Log("Received Full Image");
Debug.Log(framecounter.ToString());
//Display Image
}
//This if statement fills the imagebuffer, if the relevant flags are set
if (startreading == 1 && a < BufferSize)
{
imagebuffer[a] = start_checker[1];
a++;
}
//Catches error condition where a = buffer size - this should not happen in normal operation
if (a == BufferSize)
{
a = 2;
startreading = 0;
}
start_checker[0] = start_checker[1];
end_checker[0] = end_checker[1];
}
resp.Close();
}
Now I am blaming HttpWebRequest for this problem. Maybe it was poorly implemented in Unity. Not sure....
What's going on? Why is this happening? How can I fix it?
Is it perhaps the case that one has to use Read [a lot] instead of ReadByte?
Read [a lot]:
https://msdn.microsoft.com/en-us/library/system.io.stream.read(v=vs.110).aspx
Return Value, Type: System.Int32 - The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available.
Conceivably, ReadAsync could help (see the manual), although it results in wildly different code.
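As a rough sketch of the chunked approach (buffer size and the marker-scanning loop are my assumptions, not tested against your camera):
// Read the stream in large chunks instead of one ReadByte() call per byte.
byte[] chunk = new byte[32 * 1024]; // chunk size is arbitrary; tune as needed
int bytesRead;
while (keepRunning && (bytesRead = imagestream.Read(chunk, 0, chunk.Length)) > 0)
{
    // Scan chunk[0..bytesRead) for the 0xFF 0xD8 start and 0xFF 0xD9 end
    // markers and copy the bytes into imagebuffer, carrying the marker
    // state across chunk boundaries.
}
The point is that each Read call can return thousands of bytes at once, so the per-call overhead stops dominating.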
I'm a bit puzzled as to what part of your code you are saying has the performance problem: is it displaying the MJPEG, or is it the snippet of code you've published here? Assuming the HttpWebRequest isn't your problem (which you can easily test in Fiddler to see how long the call and fetch actually take), then I'm guessing your problem is in the display of the MJPEG, not the code you've posted (which won't be different between WinForms and Unity).
My guess is, if the problem is in Unity, you are passing the created MemoryStream to Unity to create a graphics resource? Your code reads the stream and, when it hits the end-of-image marker, creates a new MemoryStream which contains the entire data buffer. This may be a problem for Unity that isn't a problem in WinForms: the memory stream contains the whole buffer you created, but this is bigger than the actual content you read. Does Unity perhaps see this as a corrupted JPEG?
Try using the MemoryStream constructor that takes a byte range from your byte[] and pass through just the data you know makes up your image.
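Using the variable names from the posted code, that would look something like this (a is the index of the final 0xD9 byte when the footer is found, so the valid length is a + 1):
// Wrap only the bytes that were actually written, not the whole 5 MB buffer
var jpegstream = new System.IO.MemoryStream(imagebuffer, 0, a + 1);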
Other issues in the code might be (but are unlikely to be performance-related): Large Object Heap fragmentation from the creation and discarding of the large byte[]; non-dynamic storage of the incoming stream (fixed destination buffer size); and no checking of the incoming stream size or end-of-stream indicators (if the response stream does not contain the whole image, there doesn't seem to be a strategy to deal with it).
Related
I have an ESP32 with a camera attached and want to send the images over TCP. I tried it using WiFiClient (the standard TCP client implementation for the ESP over WiFi, I think). But when I send the image using client.write, only the first few thousand bytes are actually received by my C# server (for now I am writing the image to a file, where I can see that basically the whole file is just null bytes). The total image is always around 90 KB, but I thought the TCP implementation would automatically split it into multiple packets. I then tried to split it into multiple packets myself (splitting after 1000 bytes) and was able to open the file (before that, all gallery programs said it was an unknown format), but it heavily impacted performance. I know the images are fine, since when I print them in hex over Serial and convert them to images, they work.
Here is the (simplified) code for the Camera module:
info[0] = (uint8_t)_jpg_buf_len;
info[1] = (uint8_t)((_jpg_buf_len & 0xFF00) >> 8);
info[2] = (uint8_t)((_jpg_buf_len & 0xFF0000) >> 16);
if(client.write(info, 3) < 0) fatal("Error writing the image size packet.");
if(client.write(_jpg_buf, _jpg_buf_len) < 0) fatal("Error writing image packet.");
And here is the part of the C# TCP Server that receives the packets:
NetworkStream s = c.GetStream();
//The info buffer first written by the ESP, containing the lowest 3 bytes of the image size integer.
byte[] buffInfo = new byte[4];
//Reads the lowest 3 bytes into the first 3 places in the buffer, since the highest byte is always 0
await s.ReadAsync(buffInfo, 0, 3);
int imgLen = (int)BitConverter.ToUInt32(buffInfo, 0);
//This always displays the correct lengths
Console.WriteLine("Received image with a length of {0}.", imgLen);
byte[] buffImg = new byte[imgLen];
await s.ReadAsync(buffImg, 0, imgLen);
//When the buffer is now written to a file, basically the whole image is null bytes and it cannot be viewed
Do I need to be manually splitting the huge buffers up? Or is there a more performant solution to this problem?
I found the issue. The ESP is actually splitting the packets, but since the ESP is much slower than the client, the client receives them as individual packets: when it tries to read the whole image, it cannot read the full length of bytes and fills the rest with null bytes. Then it tries to read the next image length, but the bytes read still belong to the previous image (they just arrived late), so it gets really weird sizes. Here is the code I use now:
NetworkStream s = c.GetStream();
byte[] buffInfo = new byte[4];
await s.ReadAsync(buffInfo, 0, 3);
int imgLen = (int)BitConverter.ToUInt32(buffInfo, 0);
byte[] buffImg = new byte[imgLen];
int remLen = imgLen;
//keep reading until the whole image has arrived; each ReadAsync may return only part of it
while (remLen > 0) {
remLen -= await s.ReadAsync(buffImg, imgLen - remLen, remLen);
}
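One caveat (my note, not part of the original fix): the 3-byte length header can be split across reads in exactly the same way, so strictly it should be looped too:
int headerRead = 0;
while (headerRead < 3) {
    //loop until all 3 length bytes have arrived
    headerRead += await s.ReadAsync(buffInfo, headerRead, 3 - headerRead);
}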
I am reading files into an array; here is the relevant code. A new DiskReader is created for each file, and the path is determined using an OpenFileDialog.
class DiskReader{
// from variables section:
long MAX_STREAM_SIZE = 300 * 1024 * 1024; //300 MB
FileStream fs;
public Byte[] fileData;
...
// Get file size, check it is within the allowed size (MAX_STREAM_SIZE), start process including progress bar.
using (fs = File.OpenRead(path))
{
if (fs.Length < MAX_STREAM_SIZE)
{
long NumBytes = (fs.Length < MAX_STREAM_SIZE ? fs.Length : MAX_STREAM_SIZE);
updateValues[0] = (NumBytes / 1024 / 1024).ToString("#,###.0");
result = LoadData(NumBytes);
}
else
{
// Need for something to handle big files
}
if (result)
{
mainForm.ShowProgress(true);
bw.RunWorkerAsync();
}
}
...
bool LoadData(long NumBytes)
{
try
{
fileData = new Byte[NumBytes];
fs.Read(fileData, 0, fileData.Length);
return true;
}
catch (Exception e)
{
return false;
}
}
The first time I run this, it works fine. The second time I run it, sometimes it works fine, but most times it throws a System.OutOfMemoryException at
[Edit:
"first time I run this" was a bad choice of words. I meant that when I start the programme and open a file, it is fine; I get the problem when I try to open a different file without exiting the programme. When I open the second file, I am setting the DiskReader to a new instance, which means the fileData array is also a new instance. I hope that makes it clearer.]
fileData = new Byte[NumBytes];
There is no obvious pattern to it running and throwing an exception.
I don't think it's relevant, but although the maximum file size is set to 300 MB, the files I am using to test this are between 49 and 64 MB.
Any suggestions on what is going wrong here and how I can correct it?
If the exception is being thrown at that line only, then my guess is that you've got a problem somewhere else in your code, as the comments suggest. Reading the documentation of that exception here, I'd bet you call this function one too many times somewhere and simply go over the limit on object length in memory, since there don't seem to be any problem spots in the code that you posted.
The fs.Length property requires the whole stream to be evaluated, and hence reads the file anyway. Try doing something like:
byte[] result;
if (new FileInfo(path).Length < MAX_STREAM_SIZE)
{
result = File.ReadAllBytes(path);
}
Also, depending on your needs, you might avoid using a byte array and read the data directly from the file stream. This should have a much lower memory footprint.
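A minimal sketch of that idea (chunk size is an arbitrary choice):
using (FileStream stream = File.OpenRead(path))
{
    byte[] chunk = new byte[64 * 1024]; //64 KB working buffer instead of one huge array
    int read;
    while ((read = stream.Read(chunk, 0, chunk.Length)) > 0)
    {
        //process chunk[0..read) here; the whole file never sits in memory at once
    }
}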
If I understand correctly what you want to do, I have this proposal: the best option is to allocate one static array of the defined MAX size at the beginning, then keep that array and only fill it with new data from another file. This way your memory usage should be absolutely fine. You just need to store the file size in a separate variable, because the array will always have the same MAX size.
This is a common approach in systems with automatic memory management: it makes the program faster when you allocate a constant amount of memory at the start and then never allocate anything during the computation, because the garbage collector is not run many times.
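A minimal sketch of that approach (names are mine, not from the question):
const int MAX_STREAM_SIZE = 300 * 1024 * 1024; //300 MB, fits in an int
static readonly byte[] fileData = new byte[MAX_STREAM_SIZE]; //allocated once
static int fileLength; //how many bytes of fileData are valid for the current file

static void LoadFile(string path)
{
    using (FileStream fs = File.OpenRead(path))
    {
        int total = 0, read;
        //refill the same array on every call; no new allocation, no LOH churn
        while ((read = fs.Read(fileData, total, fileData.Length - total)) > 0)
            total += read;
        fileLength = total;
    }
}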
I have a TcpClient class on a client and server setup on my local machine. I have been using the NetworkStream to facilitate communications back and forth between the two successfully.
Moving forward, I am trying to implement compression in the communications. I've tried GZipStream and DeflateStream, and have decided to focus on DeflateStream. However, the connection now hangs without reading data.
I have tried 4 different implementations that all failed because the server side did not read the incoming data and the connection timed out. I will focus on the two implementations I have tried most recently and that, to my knowledge, should work.
The client boils down to this request. There are 2 separate implementations, one with a StreamWriter and one without.
textToSend = ENQUIRY + START_OF_TEXT + textToSend + END_OF_TEXT;
// Send XML Request
byte[] request = Encoding.UTF8.GetBytes(textToSend);
using (DeflateStream streamOut = new DeflateStream(netStream, CompressionMode.Compress, true))
{
//using (StreamWriter sw = new StreamWriter(streamOut))
//{
// sw.Write(textToSend);
// sw.Flush();
streamOut.Write(request, 0, request.Length);
streamOut.Flush();
//}
}
The server receives the request and I do
1.) a quick read of the first character, then, if it matches what I expect,
2.) I continue reading the rest.
The first read works correctly, and if I want to read the whole stream here, it is all there. However, I only want to read the first character, evaluate it, and then continue in my LongReadStream method.
When I try to continue reading the stream, there is no data to be read. I am guessing that the data is being lost during the first read, but I'm not sure how to determine that. All this code works correctly when I use a plain NetworkStream.
Here is the server side code.
private void ProcessRequests()
{
// This method reads the first byte of data correctly and if I want to
// I can read the entire request here. However, I want to leave
// all that data until I want it below in my LongReadStream method.
if (QuickReadStream(_netStream, receiveBuffer, 1) != ENQUIRY)
{
// Invalid Request, close connection
clientIsFinished = true;
_client.Client.Disconnect(true);
_client.Close();
return;
}
while (!clientIsFinished) // Keep reading text until client sends END_TRANSMISSION
{
// Inside this method there is no data and the connection times out waiting for data
receiveText = LongReadStream(_netStream, _client);
// Continue talking with Client...
}
_client.Client.Shutdown(SocketShutdown.Both);
_client.Client.Disconnect(true);
_client.Close();
}
private string LongReadStream(NetworkStream stream, TcpClient c)
{
bool foundEOT = false;
StringBuilder sbFullText = new StringBuilder();
int readLength, totalBytesRead = 0;
string currentReadText;
c.ReceiveBufferSize = DEFAULT_BUFFERSIZE * 100;
byte[] bigReadBuffer = new byte[c.ReceiveBufferSize];
while (!foundEOT)
{
using (var decompressStream = new DeflateStream(stream, CompressionMode.Decompress, true))
{
//using (StreamReader sr = new StreamReader(decompressStream))
//{
//currentReadText = sr.ReadToEnd();
//}
readLength = decompressStream.Read(bigReadBuffer, 0, c.ReceiveBufferSize);
currentReadText = Encoding.UTF8.GetString(bigReadBuffer, 0, readLength);
totalBytesRead += readLength;
}
sbFullText.Append(currentReadText);
if (currentReadText.EndsWith(END_OF_TEXT))
{
foundEOT = true;
sbFullText.Length = sbFullText.Length - 1;
}
else
{
sbFullText.Append(currentReadText);
}
// Validate data code removed for simplicity
}
c.ReceiveBufferSize = DEFAULT_BUFFERSIZE;
c.ReceiveTimeout = timeOutMilliseconds;
return sbFullText.ToString();
}
private string QuickReadStream(NetworkStream stream, byte[] receiveBuffer, int receiveBufferSize)
{
using (DeflateStream zippy = new DeflateStream(stream, CompressionMode.Decompress, true))
{
int bytesIn = zippy.Read(receiveBuffer, 0, receiveBufferSize);
var returnValue = Encoding.UTF8.GetString(receiveBuffer, 0, bytesIn);
return returnValue;
}
}
EDIT
NetworkStream has an underlying Socket property, which has an Available property. MSDN says this about the Available property.
Gets the amount of data that has been received from the network and is
available to be read.
Before the call below Available is 77. After reading 1 byte the value is 0.
//receiveBufferSize = 1
int bytesIn = zippy.Read(receiveBuffer, 0, receiveBufferSize);
There doesn't seem to be any documentation about DeflateStream consuming the whole underlying stream, and I don't know why it would do such a thing when there are explicit calls to read specific numbers of bytes.
Does anyone know why this happens, or if there is a way to preserve the underlying data for a future read? Based on this 'feature', and a previous article I read stating that a DeflateStream must be closed to finish sending (Flush won't work), it seems DeflateStreams may be limited in their use for networking, especially if one wishes to counter DoS attacks by testing incoming data before accepting a full stream.
The basic flaw I can see looking at your code is a possible misunderstanding of how network streams and compression work.
I think your code might work if you kept working with one DeflateStream. However, you use one in your quick read and then you create another one.
I will try to explain my reasoning with an example. Assume you have 8 bytes of original data to be sent over the network in compressed form. Now let's assume, for the sake of argument, that each byte (8 bits) of original data is compressed to 6 bits in compressed form. Now let's see what your code does with this.
From the network stream, you can't read less than 1 byte. You can't take 1 bit only. You take 1 byte, 2 bytes, or any number of bytes, but not bits.
But if you want to receive just 1 byte of the original data, you need to read the first whole byte of compressed data. However, only 6 bits of that compressed data represent the first byte of uncompressed data; the last 2 bits of the first byte belong to the second byte of original data.
Now if you cut the stream there, what is left is 5 bytes in the network stream that do not make any sense on their own and can't be decompressed.
The deflate algorithm is more complex than that, and thus it makes perfect sense that it does not allow you to stop reading from the NetworkStream at one point and continue with a new DeflateStream from the middle. There is a decompression context that must be present in order to decompress the data to its original form. Once you dispose of the first DeflateStream in your quick read, this context is gone and you can't continue.
So, to resolve your issue, try to create only one DeflateStream and pass it to your functions, then dispose of it.
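In sketch form (adapting your method signatures to take a Stream; untested, just to show the shape):
using (var inflate = new DeflateStream(_netStream, CompressionMode.Decompress, true))
{
    //one DeflateStream = one decompression context for the whole conversation
    if (QuickReadStream(inflate, receiveBuffer, 1) != ENQUIRY)
        return; //reject invalid request
    while (!clientIsFinished)
        receiveText = LongReadStream(inflate, _client);
} //dispose only after all reading is done
QuickReadStream and LongReadStream would then read from the stream they are given instead of constructing their own DeflateStream.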
This is broken in many ways.
You are assuming that a Read call will read the exact number of bytes you want. It might read everything in one-byte chunks though.
DeflateStream has an internal buffer. It can't be any other way: input bytes do not correspond 1:1 to output bytes. There must be some internal buffering. You must use one such stream.
Same issue with UTF-8: UTF-8 encoded strings cannot be split at arbitrary byte boundaries, so sometimes your Unicode data will be garbled.
Don't touch ReceiveBufferSize; it does not help in any way.
You cannot reliably flush a deflate stream, I think, because the output might end at a partial byte position. You probably should devise a message framing format in which you prepend the compressed length as an uncompressed integer, then send the compressed deflate data after the length. This is decodable in a reliable way.
Fixing these issues is not easy.
Since you seem to control both client and server, you should discard all of this and not devise your own network protocol. Use a higher-level mechanism such as web services, HTTP, or protobuf. Anything is better than what you have there.
Basically, there are a few things wrong with the code I posted above. First, when I read data I am not doing anything to make sure the data is ALL being read in. Per the Microsoft documentation:
The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter.
In my case I was not making sure my reads would get all the data I expected.
This can be accomplished simply with this code.
byte[] data = new byte[packageSize];
bytesRead = _netStream.Read(data, 0, packageSize);
while (bytesRead < packageSize)
bytesRead += _netStream.Read(data, bytesRead, packageSize - bytesRead);
On top of this problem, I had a fundamental issue with using DeflateStream: namely, I should not use DeflateStream to write to the underlying NetworkStream. The correct approach is to first use the DeflateStream to compress the data into a byte array, then send that byte array using the NetworkStream directly.
Using this approach compresses the data correctly over the network and reads it properly on the other end.
You may point out that I must know the size of the data, and that is true. Every call has an 8-byte 'header' that includes the size of the compressed data and the size of the data when it is uncompressed, although I think the second was ultimately not needed.
The code for this is here. Note that the variable packageSize serves two purposes.
int packageSize = streamIn.Read(sizeOfDataInBytes, 0, 4);
while (packageSize != 4)
{
packageSize += streamIn.Read(sizeOfDataInBytes, packageSize, 4 - packageSize);
}
packageSize = BitConverter.ToInt32(sizeOfDataInBytes, 0);
With this information I can correctly use the code I showed you first to get the contents fully.
Once I have the full compressed byte array I can get the incoming data like so:
var output = new MemoryStream();
using (var stream = new MemoryStream(bufferIn))
{
using (var decompress = new DeflateStream(stream, CompressionMode.Decompress))
{
decompress.CopyTo(output);
}
}
output.Position = 0;
var unCompressedArray = output.ToArray();
output.Close();
output.Dispose();
return Encoding.UTF8.GetString(unCompressedArray);
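For completeness, the sending side under this scheme might look like the sketch below (my reconstruction of the description above: compress into a byte array first, then write the 8-byte header and the compressed bytes to the NetworkStream directly):
byte[] payload = Encoding.UTF8.GetBytes(textToSend);
byte[] compressed;
using (var ms = new MemoryStream())
{
    using (var deflate = new DeflateStream(ms, CompressionMode.Compress, true))
    {
        deflate.Write(payload, 0, payload.Length);
    } //disposing the DeflateStream flushes the final deflate block into ms
    compressed = ms.ToArray();
}
//8-byte header: compressed size, then uncompressed size
netStream.Write(BitConverter.GetBytes(compressed.Length), 0, 4);
netStream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
netStream.Write(compressed, 0, compressed.Length);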
Edit: SOLVED! Please see my answer down below for details.
I was unable to find an answer to the original question, but I found an alternate solution.
This question may have been asked somewhere else, but I have been searching for days and can't find anything that helps.
Question: I need to convert a Stream to an Image<Bgr, Byte> in one go. Is there a way/command to convert directly from System.Drawing.Image.FromStream to Emgu.CV.Image<Bgr, Byte> without converting from stream to Image to Bitmap to Image<Bgr, Byte>?
Information: I'm coding in C# in Visual Studio 2010 as part of my dissertation project.
I am taking an image stream from an IP camera on a network and applying many algorithms to detect faces, extract facial features and recognise an individual's face. On my laptop's local camera I can achieve an FPS of about 25 (give or take), including the algorithms, because I don't have to convert the image. For an IP camera stream I need to convert it many times to achieve the desired format, and the result is around 5-8 FPS.
(I know my current method is extremely inefficient, which is why I'm here; I'm actually converting an image 5 times in total (even gray scaling too), while only using about half of my machine's resources (i7, 8 GB RAM).) It does have to be Image<Bgr, Byte>, as that is the only format the algorithms will work with.
The code I'm using to get the image:
//headers
using System.IO;
using System.Threading;
using System.Net;
//request a connection
req = (HttpWebRequest)HttpWebRequest.Create(cameraUrl);
//gives a chance for timeouts, errors or loss of connection to occur
req.AllowWriteStreamBuffering = true;
req.Timeout = 20000;
//retrieve response (if successful)
res = req.GetResponse();
//image returned
stream = res.GetResponseStream();
I have a lot of stuff in the background managing connections, data, security, etc., which I have shortened to the above code.
My current code to convert the image to the desired output:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for other uses
gray = currentFrame.Convert<Gray, Byte>();
I understand there is a method to save an image locally temporarily, but I would need to avoid that for security purposes. I'm looking more for a direct conversion to help save processing power.
Am I overlooking something? All help is appreciated.
Thanks for reading. (I will update this if anyone requests any more details)
-Dave
You've got a couple of potential bottlenecks, not the least of which is that you're probably JPEG-decoding the stream into an image and then converting that into a bitmap and then into an OpenCV image.
One way around this is to bypass .NET imaging entirely. This would involve using libjpeg directly. There's a free port of it here in C#, and IIRC you can hook into it to get called on a per-scanline basis to fill up a buffer.
The downside is that you're decoding JPEG data in managed code, which will run at least 1.5x slower than the equivalent C, although quite frankly I would expect network speed to dwarf this immensely.
OpenCV should be able to read JPEG images directly (wanna guess what they use under the hood? Survey says: libjpeg), which means that you can buffer up the entire stream and hand it to OpenCV, bypassing the .NET layer entirely.
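A sketch of that idea (hedged: CvInvoke.Imdecode and Mat.ToImage exist in newer Emgu releases; older 2.x versions expose different decode calls, so check your version):
//Buffer the whole HTTP response, then hand the JPEG bytes straight to OpenCV
byte[] jpegBytes;
using (var ms = new MemoryStream())
{
    stream.CopyTo(ms); //'stream' is the response stream from the question
    jpegBytes = ms.ToArray();
}
var frame = new Mat();
CvInvoke.Imdecode(jpegBytes, ImreadModes.Color, frame); //decodes to 8-bit BGR
currentFrame = frame.ToImage<Bgr, Byte>(); //no System.Drawing round-trip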
I believe I found the answer to my problem. I have dabbled with Vano Maisuradze's idea of processing in memory, which improved the FPS by a tiny margin (not immediately noticeable without testing). And thanks to Plinth's answer I now have an understanding of multi-threading, and I can optimise this as I progress, since I can split the algorithms up to work in parallel.
What I think is the cause is the networking speed, not the actual algorithm delay! As pointed out by Vano, timing with a stopwatch showed that the algorithms didn't actually consume that much. So with and without the algorithms the speed is about the same, if I optimise using threading so the next frame is being collected as the previous one finishes processing.
I did some testing on some physical Cisco routers and got the same result, if a bit slower, when messing round with clock speeds and bandwidths, which was noticeable. So I need to find a way to retrieve frames over the network faster. A very big thank you to everyone who answered and helped me understand better!
Conclusion:
Multi-threading to optimise
Processing in memory instead of converting constantly
Better networking solutions (Higher bandwidth and speeds)
Edit: The code to retrieve an image and process in memory for anyone who finds this looking for help
public void getFrames(object sender, EventArgs e)
{//Gets a frame from the IP cam
//Replace "IPADDRESS", "USERNAME", "PASSWORD"
//with respective data for your camera
string sourceURL = "http://IPADDRESS/snapshot.cgi?user=USERNAME&pwd=PASSWORD";
//used to store the image retrieved in memory
byte[] buffer = new byte[640 * 480];
int read, total = 0;
//Send a request to the peripheral via HTTP
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(sourceURL);
WebResponse resp = req.GetResponse();
//Get the image capture after receiving a request
//Note: just a snapshot, not a steady stream
Stream stream = resp.GetResponseStream();
while ((read = stream.Read(buffer, total, 1000)) != 0)
{
total += read;
}//While End
//Convert memory (byte) to bitmap and store in a picturebox
pictureBox1.Image = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0, total));
}//getFrames End
private void button1_Click(object sender, EventArgs e)
{//Trigger an event to start running the function when possible
Application.Idle += new EventHandler(getFrames);
}//Button1_Click End
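One caution about the snippet above (my note, not the original poster's): the fixed 640 * 480 buffer merely happens to be large enough for typical 640x480 JPEGs; a growable MemoryStream removes that assumption:
byte[] data;
using (var ms = new MemoryStream())
{
    stream.CopyTo(ms); //grows as needed, no fixed-size guess
    data = ms.ToArray();
}
pictureBox1.Image = (Bitmap)Bitmap.FromStream(new MemoryStream(data));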
You can save several images in memory (a buffer) and then start processing from the buffer.
Something like this:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for later use
gray = currentFrame.Convert<Gray, Byte>();
SaveToBuffer(gray);
Queue<Emgu.CV.Image<Gray, Byte>> buffer = new Queue<Emgu.CV.Image<Gray, Byte>>();
bool canProcess = false;
// ...
private void SaveToBuffer(Emgu.CV.Image<Gray, Byte> img)
{
buffer.Enqueue(img);
canProcess = buffer.Count > 100;
}
private void Process()
{
if(canProcess)
{
var img = buffer.Dequeue();
// Processing logic on img goes here...
}
else
{
// Buffer is still loading...
}
}
But note that you will need enough RAM to store the images in memory, and you should also adjust the buffer size to meet your requirements.
I'm writing a server/client application which works with SSL (over SslStream) and has to do many things (not only file receiving/sending). Currently it works like this: there is only one connection. I always send data from the client/server using SslStream.WriteLine() and receive it using SslStream.ReadLine(), because I can send all information over one connection and I can send from all threads without destroying the data.
Now I wanted to implement file sending and receiving. Like other things in my client/server apps, every message has a prefix (like cl_files or something) and a base64-encoded content part (prefix and content are separated by |). I implemented the file sharing like this: the uploader sends the receiver a message with the total file size, and after that the uploader sends the base64-encoded parts of the file under the prefix r.
My problem is that the file sharing is really slow. I get around 20 KB/s from localhost to localhost. I also have another problem: if I increase the size of the base64-encoded parts of the file (which makes file sharing faster), the prefix r no longer reaches the receiver (so the data can't be identified).
How can I make it faster?
Any help will be greatly appreciated.
My (probably bad) code for the client:
//runs inside a Thread
FileInfo x = new FileInfo(ThreadInfos.Path);
long size = x.Length; //gets total size
long cursize = 0;
FileStream fs = new FileStream(ThreadInfos.Path, FileMode.Open);
int readblocks = 0;
while (cursize < size) {
byte[] buffer = new byte[4096];
readblocks = fs.Read(buffer, 0, 4096);
ServerConnector.send("r", getBase64FromBytes(buffer));//sends the encoded data with the prefix r over SSLStream.WriteLine
cursize = cursize + Convert.ToInt64(readblocks);
ThreadInfos.wait.setvalue((cursize / size) * 100);//outputs progress to the gui
}
fs.Close();
For the Server:
case "r"://switch case for prefixes
if (isreceiving)
{
byte[] buffer = getBytesFromBase64(splited[1]);//splited is the received line from ReadLine, split by the separator "|"
rsize = rsize + buffer.LongLength;
writer.Write(buffer, 0, buffer.Length);//writes the decoded data into the file
if (rsize == rtotalsize)//checks if file is completed
{
writer.Close();
}
}
break;
Your problem stems from the fact that you are performing what is essentially a binary operation through a text protocol, and you are exacerbating that problem by doing it over an encrypted channel. I'm not going to re-invent this for you, but here are some options...
Consider converting to an HTTPS client/server model instead of reinventing the wheel. This will give you a well-defined model for PUT/GET operations on files.
If you cannot (or will not) convert to HTTPS, consider other client/server libraries that provide a secure transport and a well-defined protocol for binary data. For example, I often use protobuf-csharp-port and protobuf-csharp-rpc to provide a secure protocol and transport within our datacenter or local network.
If you are stuck with your transport being a raw SslStream, try using a well-defined and proven binary serialization framework like protobuf-csharp-port or protobuf-net to define your protocol.
Lastly, if you must continue with the framework you have, try some HTTP-like tricks: write a name/value pair as text that defines the raw binary content that follows.
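A sketch of that last option (the header format here is invented purely for illustration; fileName and fileBytes are hypothetical placeholders for whatever file you are sending):
//Text header describing the binary payload, terminated by a newline...
string header = "file name=" + fileName + " length=" + fileBytes.Length + "\n";
byte[] headerBytes = Encoding.ASCII.GetBytes(header);
sslStream.Write(headerBytes, 0, headerBytes.Length);
//...followed by the raw bytes, with no base64 inflation
sslStream.Write(fileBytes, 0, fileBytes.Length);
sslStream.Flush();
//The receiver reads text up to the newline, parses the length,
//then reads exactly that many raw bytes before expecting the next header.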
First of all, base64 over SSL will be slow anyway; SSL itself is slower than raw transport. File transfers are not done over base64 nowadays; the HTTP protocol is much more stable than anything else, and most libraries on all platforms are very stable. Base64 takes more space than the actual data, plus the time to encode.
Also, your following line may be a problem.
ThreadInfos.wait.setvalue((cursize / size) * 100);//outputs progress to the gui
If this line blocks, then it will slow you down for every 4 KB. Updating for every 4 KB is also not right; unless the progress value differs from the previous value by a significant amount, there is no need to update the UI for it.
I'd try gzip compression before/after the network. From my experience, it helps. Some code like this could help:
using(GZipStream stream = new GZipStream(sslStream, CompressionMode.Compress))
{
stream.Write(...);
stream.Flush();
stream.Close();
}
Warning: it may interfere with SSL if the Flush is not done, and it will need some tests... and I didn't try to compile the code.
I think Akash Kava is right.
while (cursize < size) {
DateTime start = DateTime.Now;
byte[] buffer = new byte[4096];
readblocks = fs.Read(buffer, 0, 4096);
ServerConnector.send("r", getBase64FromBytes(buffer));
DateTime end = DateTime.Now;
Console.WriteLine((end - start).TotalSeconds);
cursize = cursize + Convert.ToInt64(readblocks);
ThreadInfos.wait.setvalue((cursize / size) * 100);
end = DateTime.Now;
Console.WriteLine((end - start).TotalSeconds);
}
By doing this you can find out where the bottleneck is.
Also, the way you are sending data packets to the server is not robust.
Is it possible to paste your implementation of
ThreadInfos.wait.setvalue((cursize / size) * 100); ?