C# WebClient upload speeds - c#

I was wondering if it is possible to increase the buffer size on WebClient async data uploads, because it currently pushes ~320 kB/s at most.
My current code:
using (WebClient Client = new WebClient())
{
    byte[] Buffer = File.ReadAllBytes(this.WorkItem.FileLocation);
    Client.UploadProgressChanged += new UploadProgressChangedEventHandler(Client_UploadProgressChanged);
    Client.UploadDataCompleted += new UploadDataCompletedEventHandler(Client_UploadDataCompleted);
    Client.UploadDataAsync(new Uri("-snip-"), Buffer);
}
Edit
The connection is not the limiting factor. (It's a 300 Mbit connection; web servers push content at the ~30-40 MB/s mark.)

If you want more control over how the data is buffered, you need to use the HttpWebRequest class. With it you can choose the size of your reads from a FileStream and, separately, how much you write to the network stream at a time. Doing 4 MB reads and 32 KB writes was optimal for maxing out my network throughput (although you will have to run your own benchmarks to see which buffer sizes work best in your scenario).
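As a rough sketch of that approach (the URL and the `filePath` variable are placeholders, and the 4 MB / 32 KB sizes are just the starting points mentioned above, not universal optima):

```csharp
// Sketch: upload a file with HttpWebRequest, reading from disk in
// large chunks and writing to the request stream in smaller ones.
const int ReadBufferSize = 4 * 1024 * 1024; // 4 MB reads from disk
const int WriteChunkSize = 32 * 1024;       // 32 KB writes to the network

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/upload"); // placeholder URL
request.Method = "POST";
request.AllowWriteStreamBuffering = false;  // stream the body instead of buffering it all in memory
request.ContentLength = new FileInfo(filePath).Length;

using (FileStream fileStream = File.OpenRead(filePath))
using (Stream requestStream = request.GetRequestStream())
{
    byte[] buffer = new byte[ReadBufferSize];
    int read;
    while ((read = fileStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Push the block we just read to the network in smaller writes.
        for (int offset = 0; offset < read; offset += WriteChunkSize)
        {
            int count = Math.Min(WriteChunkSize, read - offset);
            requestStream.Write(buffer, offset, count);
        }
    }
}

using (WebResponse response = request.GetResponse())
{
    // inspect the server's response here
}
```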

Related

How to use ReadAsync() on a network stream in combination with processing?

I am trying to download a large file (about 500 MB) from a server, but instead of saving it to the filesystem I am trying to process it "on the fly": retrieving chunks of data, analysing them and, when there is enough information, saving them to the database. Here is what I am trying to do:
byte[] buffer = new byte[64 * 1024];
using (HttpResponseMessage response = await httpClient.GetAsync(Server + file, HttpCompletionOption.ResponseHeadersRead))
using (Stream streamToReadFrom = await response.Content.ReadAsStreamAsync())
{
    int wereRead;
    do
    {
        wereRead = await streamToReadFrom.ReadAsync(buffer, 0, buffer.Length);
        // Do the processing and saving
    } while (wereRead == buffer.Length);
}
I tried a 64k buffer because the chunks of data I need to process are about that size. My reasoning was that since I am 'awaiting' ReadAsync, the call would not return until the buffer was full, but that is not the case: the method was returning with only 7k to 14k bytes read. I tried a much smaller buffer, but since my processing is much faster than the download, with a 4k buffer I might get a full buffer on the first iteration but only, say, 3k on the second.
Is there an approach that would be recommended in my situation? Basically, I want ReadAsync to only return once the buffer is full, or once the end of the stream is reached.
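One common pattern for this situation (a sketch, not taken from the question) is a small helper that keeps calling ReadAsync until the buffer is full or the stream ends, since Stream.ReadAsync is only guaranteed to return *at least one* byte, not a full buffer:

```csharp
// Sketch: read until 'buffer' is full or the stream ends.
// Returns the number of bytes actually read; this is less than
// buffer.Length only when the end of the stream was reached.
static async Task<int> ReadFullyAsync(Stream stream, byte[] buffer)
{
    int total = 0;
    while (total < buffer.Length)
    {
        int read = await stream.ReadAsync(buffer, total, buffer.Length - total);
        if (read == 0)
            break; // end of stream
        total += read;
    }
    return total;
}
```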

Bandwidth throttling while copying files between computers

I've been trying to write a program that transfers a file (after zipping it) to another computer on the same network, with bandwidth throttling.
I need the bandwidth throttled in order to avoid saturating the link (kind of the way Robocopy does).
Recently I found the ThrottledStream class, but it doesn't seem to be working: I can send a 9 MB file with a 1-byte throttling limit and it still arrives almost instantly, so I need to know if I am misapplying the class.
Here's the code:
using (FileStream originStream = inFile.OpenRead())
using (MemoryStream compressedFile = new MemoryStream())
using (GZipStream zippingStream = new GZipStream(compressedFile, CompressionMode.Compress))
{
    originStream.CopyTo(zippingStream);
    using (FileStream finalDestination = File.Create(destination.FullName + "\\" + inFile.Name + ".gz"))
    {
        ThrottledStream destinationStream = new ThrottledStream(finalDestination, bpsLimit);
        byte[] buffer = new byte[bufferSize];
        int readCount = compressedFile.Read(buffer, 0, bufferSize);
        while (readCount > 0)
        {
            destinationStream.Write(buffer, 0, bufferSize);
            readCount = compressedFile.Read(buffer, 0, bufferSize);
        }
    }
}
Any help would be appreciated.
The ThrottledStream class you linked to uses a delay calculation to determine how long to wait before performing the current write. The delay is based on the amount of data sent before the current write and how much time has elapsed. Once the delay period has passed, it writes the entire buffer in a single chunk.
The problem with this is that it doesn't do any checks on the size of the buffer being written in a particular write operation. If you ask it to limit throughput to 1 byte per second, then call the Write method with a 20MB buffer, it will write the entire 20MB immediately. If you then try to write another block of data that is 2 bytes long, it will wait for a very long time (20*2^20 seconds) before writing those two bytes.
In order to get the ThrottledStream class to work more smoothly, you have to call Write with very small blocks of data. Each block will still be written immediately, but the delays between the write operations will be smaller and the throughput will be much more even.
In your code you use a variable named bufferSize to determine the number of bytes to process per read/write in the internal loop. Try setting bufferSize to 256, which will result in many more reads and writes, but will give the ThrottledStream a chance to actually introduce some delays.
If you set bufferSize to be the same as bpsLimit, you should see a single write operation complete every second. The smaller you set bufferSize, the more write operations you'll get per second and the smoother the bandwidth throttling will be.
Normally we like to process as much of a buffer as possible in each operation to decrease the overheads, but in this case you're explicitly trying to add overheads to slow things down :)
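A sketch of the small-chunk approach (assuming the same ThrottledStream class and the variables from your code; note the source stream also needs rewinding before it is read back):

```csharp
// Sketch: feed ThrottledStream in small chunks so it has a chance to
// insert delays between writes. 256 bytes is a starting point to tune.
const int chunkSize = 256;

compressedFile.Position = 0; // rewind the MemoryStream before copying from it
byte[] buffer = new byte[chunkSize];
int readCount;
while ((readCount = compressedFile.Read(buffer, 0, buffer.Length)) > 0)
{
    // Write only the bytes actually read, not the whole buffer.
    destinationStream.Write(buffer, 0, readCount);
}
```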

Is it a good idea to write a full message to a NetworkStream, or to write sections of each message?

I was wondering whether writing a full message to a NetworkStream would be better than writing each section of the message in multiple Write calls. For example, would writing the message in full like this...
NetworkStream ns = tcpClient.GetStream();
byte[] message = Encoding.ASCII.GetBytes("This is a message.");
ns.Write(message, 0, message.Length);
... be a better idea than writing like this...
NetworkStream ns = tcpClient.GetStream();
byte[] message1 = Encoding.ASCII.GetBytes("This ");
byte[] message2 = Encoding.ASCII.GetBytes("is ");
byte[] message3 = Encoding.ASCII.GetBytes("a ");
byte[] message4 = Encoding.ASCII.GetBytes("message.");
ns.Write(message1, 0, message1.Length);
ns.Write(message2, 0, message2.Length);
ns.Write(message3, 0, message3.Length);
ns.Write(message4, 0, message4.Length);
Is there also much difference in program performance or networking performance for each method?
This gets tricky, and depends on how the socket is configured. What you Write does not map directly to what is received:
the NIC may have to split it into packets at transmission
the socket/NIC may be configured to combine packets for transmission (reducing the actual number of network packets, but making it hard to be sure that you've sent what you think you have - it may be buffered locally)
Note in particular that NetworkStream.Flush() doesn't do anything, so you can't use it to push out any last few bytes
A good compromise is to wrap the NetworkStream in a BufferedStream, and configure the Socket with NoDelay = true (disables local output buffering). The BufferedStream allows you to keep writing in any-size chunks (including individual bytes) without causing huge packet fragmentation; the BufferedStream will flush itself when it approaches a set size. Importantly, you now have access to the BufferedStream's Flush() method, which will empty the buffered data to the network; that is useful if you are having a complex back-and-forth conversation and need to know you've sent the end of your message.
The risk otherwise is that the client waits forever for a response (without realising it still has 3 bytes buffered locally, so hasn't sent a full request), and the server doesn't respond because it is still waiting for the last 3 bytes of a request. Deadlock.
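A sketch of that setup (buffer size and message content are illustrative):

```csharp
// Sketch: disable Nagle-style coalescing on the socket and wrap the
// NetworkStream in a BufferedStream, so small writes accumulate locally
// and an explicit Flush() pushes a complete message to the network.
tcpClient.Client.NoDelay = true; // disable local output buffering on the socket

using (NetworkStream ns = tcpClient.GetStream())
using (BufferedStream bs = new BufferedStream(ns, 8192)) // 8 KB buffer is illustrative
{
    byte[] message = Encoding.ASCII.GetBytes("This is a message.");
    bs.Write(message, 0, message.Length); // may sit in the local buffer...
    bs.Flush();                           // ...until explicitly flushed to the network
}
```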
In terms of networking it will be the same. But on the client you don't need to have the entire message loaded in memory if you use the second approach. This could be useful when dealing with really big volumes of data. For example let's suppose that you want to send huge files. You could read those files in chunks so that you never need to load the entire file contents in memory and send it in chunks to the network. Only one chunk will ever be loaded at a time in-memory on the client.
But obviously, if you already have the entire message in memory, don't bother - write it to the network socket in one shot.

How to write a stream to a MemoryStream?

public void doprocess(TcpClient client)
{
    MemoryStream ms = new MemoryStream();
    Stream clStream = client.GetStream();
    byte[] buffer_1 = new byte[8192];
    int count = clStream.Read(buffer_1, 0, buffer_1.Length);
    while (count > 0)
    {
        ms.Write(buffer_1, 0, count);
        // the line below doesn't return a response and the code hangs here
        count = clStream.Read(buffer_1, 0, buffer_1.Length);
    }
}
Is there any other way to write one stream to another? I want to use this Stream twice, which is why I need to write it to the MemoryStream.
In .NET 4 the copying part is really easy:
MemoryStream ms = new MemoryStream();
client.GetStream().CopyTo(ms);
If you're not using .NET 4, then code similar to what you've already got is basically the same thing.
However, note that this (and any other attempt) will only work if the network stream has been closed - otherwise the Read call will block waiting for more data. I suspect that's what's going wrong for you - and it's a fundamental problem.
If your network stream isn't closed, then it's unclear how you'd really want it to behave - should the two readers of the "split" stream basically read any buffered data, but then block until there's new data otherwise? The buffered data could be removed when it's been read from both streams, of course. If that is what you're after, I don't think there's anything in .NET to help you particularly - and it's probably pretty tricky.
If you could give us more context of what you're trying to do (and what the TcpClient is connecting to) that would really help.
The reason your code hangs on clStream.Read is that you are trying to read 8192 bytes from the socket, but the other side is not writing that many bytes. So the client just sits there and waits for the other side to send the required number of bytes. Depending on the protocol you are implementing over TCP, there must be some indication from the server of how much data it intends to send, so that the client knows in advance and only tries to read that many bytes.
For example in the HTTP protocol the server sends in the headers the Content-Length header to indicate to the clients how much data is going to be sent in the body.
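As an illustration, with a simple length-prefixed framing convention (an assumed protocol, not something from the question) the client can read exactly the advertised number of bytes and never block waiting for data that will not arrive:

```csharp
// Sketch: read a 4-byte little-endian length prefix, then exactly that
// many payload bytes, instead of reading until the stream is closed.
static byte[] ReadMessage(Stream stream)
{
    byte[] lengthBytes = new byte[4];
    ReadExactly(stream, lengthBytes, 4);
    int length = BitConverter.ToInt32(lengthBytes, 0);

    byte[] payload = new byte[length];
    ReadExactly(stream, payload, length);
    return payload;
}

// Loop until 'count' bytes have been read; Stream.Read may return fewer.
static void ReadExactly(Stream stream, byte[] buffer, int count)
{
    int total = 0;
    while (total < count)
    {
        int read = stream.Read(buffer, total, count - total);
        if (read == 0)
            throw new EndOfStreamException("stream closed mid-message");
        total += read;
    }
}
```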

How to make my application copy file faster

I have created a Windows application that routinely downloads files from a load-balanced server; currently the speed is about 30 MB/second. However, when I try FastCopy or TeraCopy, they can copy at about 100 MB/second. I want to know how to improve my copy speed so my application can copy files faster than it currently does.
One common mistake when using streams is to copy a byte at a time, or to use a small buffer. Most of the time it takes to write data to disk is spent seeking, so using a larger buffer will reduce your average seek time per byte.
Operating systems write files to disk in clusters. This means that when you write a single byte to disk Windows will actually write a block between 512 bytes and 64 kb in size. You can get much better disk performance by using a buffer that is an integer multiple of 64kb.
Additionally, you can get a boost from using a buffer that is a multiple of your CPU's underlying memory page size; for x86/x64 machines this is either 4 KB or 4 MB.
So you want to use an integer multiple of 4 MB.
Additionally if you use asynchronous IO you can fully take advantage of the large buffer size.
class Downloader
{
    const int size = 4096 * 1024; // 4 MB receive buffer

    ManualResetEvent done = new ManualResetEvent(false);
    Socket socket;
    Stream stream;

    void InternalWrite(IAsyncResult ar)
    {
        var read = socket.EndReceive(ar);
        if (read == size)
            InternalRead(); // kick off the next receive while we write this block
        stream.Write((byte[])ar.AsyncState, 0, read);
        if (read != size)
            done.Set(); // a receive shorter than 'size' is treated as end-of-stream
    }

    void InternalRead()
    {
        var buffer = new byte[size];
        socket.BeginReceive(buffer, 0, size, System.Net.Sockets.SocketFlags.None, InternalWrite, buffer);
    }

    public bool Save(Socket socket, Stream stream)
    {
        this.socket = socket;
        this.stream = stream;
        InternalRead();
        return done.WaitOne(); // block until the transfer completes
    }
}

bool Save(System.Net.Sockets.Socket socket, string filename)
{
    using (var stream = File.OpenWrite(filename))
    {
        var downloader = new Downloader();
        return downloader.Save(socket, stream);
    }
}
Possibly your application could use multiple threads to download the file, although the bandwidth is ultimately limited by the speed of the devices transferring the content.
The simplest way is to open the file in raw/binary mode (that's C-speak; I'm not sure what the C# equivalent is) and read and write very large blocks (several MB) at a time.
The trick TeraCopy uses is to make the reading and writing asynchronous. This means that a block of data can be written while another one is being read.
You have to fiddle around with the number of blocks and the size of those blocks to get the optimum for your situation. I used this method using C++ and for us the optimum was using four blocks of 256KB when copying from a network share to a local disk.
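A minimal sketch of that read/write overlap (double buffering) in C#, using Task-based stream IO; the 256 KB block size is the one that happened to be optimal in my case, so treat it as a tuning parameter:

```csharp
// Sketch: overlap reads and writes with two buffers, so one block can be
// written while the next one is being read.
static async Task CopyDoubleBufferedAsync(Stream input, Stream output)
{
    const int blockSize = 256 * 1024; // tune per scenario
    byte[] current = new byte[blockSize];
    byte[] next = new byte[blockSize];

    int read = await input.ReadAsync(current, 0, blockSize);
    while (read > 0)
    {
        // Start reading the next block while writing the current one.
        Task<int> readTask = input.ReadAsync(next, 0, blockSize);
        await output.WriteAsync(current, 0, read);
        read = await readTask;

        // Swap buffers for the next iteration.
        byte[] tmp = current;
        current = next;
        next = tmp;
    }
}
```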
Regards,
Sebastiaan
If you run Process Monitor you can see the block sizes that Windows Explorer or TeraCopy are using.
In Vista the default block size for the local network is, as far as I recall, 2 MB, which makes copying files over a big pipe a lot faster.
Why reinvent the wheel?
If your situation permits, you are probably better off shelling out to one of the existing "fast" copy utilities than trying to write one yourself. There are numerous non-obvious edge cases which need to be handled, and getting consistently good performance requires lots of trial-and-error experimentation.
