I'm trying to make an auction sniper for a site. To place a bid you need to send 4 parameters (and cookies, of course) to /auction/place_bid. I need to use sockets, not HttpWebRequest. Here's the code:
string request1 = "POST /auction/place_bid HTTP/1.1\r\nHost: *host here*\r\nConnection: Keep-Alive\r\nUser-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)\r\nAccept: /*\r\nContent-Type: application/x-www-form-urlencoded; charset=UTF-8\r\nX-Requested-With: XMLHttpRequest\r\n" + cookies +"\r\n";
string request3 = "token=" + token + "&aid=" + aid + "&bidReq=" + ptzReq + "&recaptcha_challenge_field=" + rcf + "&recaptcha_response_field=" + rrf+"\r\n\r\n";
string request2 = "Content-Length: " + (Encoding.UTF8.GetByteCount(request1+request3)+23).ToString() + "\r\n";
byte[] dataSent = Encoding.UTF8.GetBytes(request1+request2+request3);
byte[] dataReceived = new byte[10000];
Socket socket = ConnectSocket(server, 80);
if (socket == null)
{
return null;
}
socket.Send(dataSent, dataSent.Length, 0);
int bytes = 0;
string page = "";
do
{
bytes = socket.Receive(dataReceived, dataReceived.Length, 0);
page = page + Encoding.ASCII.GetString(dataReceived, 0, bytes);
}
while (bytes > 0);
return page;
When I try to receive the webpage, Visual Studio says "Operation on a non-blocking socket cannot be completed immediately". When I add
socket.Blocking = true;
my application stops responding, and after ~1 minute it returns the page, but it's empty! When I make a GET request it works perfectly. I hope you can help me. By the way, this is the first time I've used sockets, so my code is pretty bad, sorry about that.
I'm using a ConnectSocket class, which was given as an example on MSDN. (The link leads to the Russian MSDN, sorry; I didn't find the same article in English, but you'll understand the code anyway.)
The Content-Length header should indicate the size of the content. You're setting it to the total size of your headers and content.
(Encoding.UTF8.GetByteCount(request1+request3)+23).ToString()
Since the content part of your message is just request3, the server is patiently waiting for ByteCount(request1)+23 more bytes of content which you never send.
Try this instead:
"Content-Length: " + Encoding.UTF8.GetByteCount(request3).ToString() + "\r\n"
Another issue is your receive loop:
do
{
bytes = socket.Receive(dataReceived, dataReceived.Length, 0);
page = page + Encoding.ASCII.GetString(dataReceived, 0, bytes);
}
while (bytes > 0);
Since non-blocking socket operations always return immediately whether or not they've completed yet, you need a loop that keeps calling Receive() until the operation has actually completed. Here, if the call to Receive() returns 0 (which it almost certainly will the first time) you exit the loop.
You should at least change it to while (bytes <= 0) which would get you at least some data (probably just the first packet's worth or so). Ideally, you should keep calling Receive() until you see the Content-Length header in the reply, then continue calling Receive() until the end of the headers, then read Content-Length more bytes.
Since you're using sockets, you really have to re-implement the HTTP protocol.
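As a rough sketch of what that looks like with a blocking socket (Content-Length only; chunked transfer encoding would need extra handling, and the namespaces System.IO, System.Text and System.Text.RegularExpressions are assumed):
// Read until the end of the headers, extract Content-Length, then read that many body bytes.
var buffer = new byte[8192];
var data = new MemoryStream();
int headerEnd = -1;
while (headerEnd < 0)
{
    int n = socket.Receive(buffer);
    if (n == 0) return null;                                  // connection closed early
    data.Write(buffer, 0, n);
    headerEnd = Encoding.ASCII.GetString(data.ToArray()).IndexOf("\r\n\r\n");
}
string headerText = Encoding.ASCII.GetString(data.ToArray(), 0, headerEnd);
Match m = Regex.Match(headerText, @"Content-Length:\s*(\d+)", RegexOptions.IgnoreCase);
int contentLength = m.Success ? int.Parse(m.Groups[1].Value) : 0;
while (data.Length < headerEnd + 4 + contentLength)           // 4 = length of "\r\n\r\n"
{
    int n = socket.Receive(buffer);
    if (n == 0) break;
    data.Write(buffer, 0, n);
}
return Encoding.UTF8.GetString(data.ToArray());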
As people have already pointed out: HttpWebRequest is not the cause of your performance issues. Switching to a socket implementation will not change anything.
The fact is that HttpWebRequest can do zillions of stupid things if it wants to, and its overhead will still be dwarfed by the time it takes to get anything back from the web server.
Switching to a socket implementation might speed things up if you have good knowledge of both sockets AND the HTTP protocol. You clearly do not have that, so I would recommend that you go back to HttpWebRequest.
You might want to use WebClient if you are going to fetch lots of pages from the same webserver since it will keep the connection alive.
Update
I don't need a lot of connections, I need to make one request at a time, and it should be as fast as possible
Well, then it doesn't really matter which implementation you use. The network latency will ALWAYS be far larger than the time spent in the HTTP client implementation. Building an HTTP request doesn't take many resources, and neither does parsing a response.
Related
I have a WPF app that processes a lot of URLs (thousands); each is sent off to its own thread, does some processing, and stores a result in the database.
The URLs can be anything, but some are massively big pages; this shoots the memory usage up a lot and makes performance really bad. I set a timeout on the web request, so if it takes longer than, say, 20 seconds it doesn't bother with that URL, but that doesn't seem to make much difference.
Here's the code section:
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(urlAddress.Address);
req.Timeout = 20000;
req.ReadWriteTimeout = 20000;
req.Method = "GET";
req.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
using (StreamReader reader = new StreamReader(req.GetResponse().GetResponseStream()))
{
pageSource = reader.ReadToEnd();
req = null;
}
It also seems to stall/ramp up memory on reader.ReadToEnd();
I would have thought having a cut-off of 20 seconds would help; is there a better method? I assume there's not much advantage to using the async web methods, as each URL download is on its own thread anyway.
Thanks
In general, it's recommended that you use asynchronous HttpWebRequests instead of creating your own threads. The article I've linked above also includes some benchmarking results.
I don't know what you're doing with the page source after you read the stream to end, but using strings can be an issue:
System.String type is used in any .NET application. We have strings
as: names, addresses, descriptions, error messages, warnings or even
application settings. Each application has to create, compare or
format string data. Considering the immutability and the fact that any
object can be converted to a string, all the available memory can be
swallowed by a huge amount of unwanted string duplicates or unclaimed
string objects.
Some other suggestions:
Do you have any firewall restrictions? I've seen a lot of issues at work where the firewall enables rate limiting and fetching pages grinds down to a halt (happens to me all the time)!
I presume that you're going to use the string to parse HTML, so I would recommend that you initialize your parser with the Stream instead of passing in a string containing the page source (if that's an option).
If you're storing the page source in the database, then there isn't much you can do.
Try to eliminate the reading of the page source as a potential contributor to the memory/performance problem by commenting it out.
Use a streaming HTML parser such as Majestic-12, which avoids the need to load the entire page source into memory (again, if you need to parse)!
Limit the size of the pages you're going to download; say, only download 150 KB. The average page size is about 100-130 KB.
Additionally, can you tell us what's your initial rate of fetching pages and what does it go down to? Are you seeing any errors/exceptions from the web request as you're fetching pages?
Update
In the comment section I noticed that you're creating thousands of threads, and I would say that you don't need to do that. Start with a small number of threads and keep increasing them until performance on your system peaks. Once you start adding threads and the performance looks like it has tapered off, stop adding threads. I can't imagine that you will need more than 128 threads (even that seems high). Create a fixed number of threads, e.g. 64, let each thread take a URL from your queue, fetch the page, process it, and then go back to getting pages from the queue again.
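A minimal sketch of that worker-pool pattern (ConcurrentQueue requires .NET 4; urls is assumed to be your collection of URLs, and ProcessPage is a hypothetical placeholder for your parsing/DB work):
var queue = new ConcurrentQueue<string>(urls);                // System.Collections.Concurrent
var workers = new List<Thread>();
for (int i = 0; i < 64; i++)                                  // fixed number of worker threads
{
    var t = new Thread(() =>
    {
        string url;
        while (queue.TryDequeue(out url))                     // pull URLs until the queue is empty
        {
            try
            {
                var req = (HttpWebRequest)WebRequest.Create(url);
                req.Timeout = 20000;
                using (var resp = req.GetResponse())
                using (var reader = new StreamReader(resp.GetResponseStream()))
                {
                    ProcessPage(url, reader.ReadToEnd());     // hypothetical parse/store step
                }
            }
            catch (WebException) { /* log and move on */ }
        }
    });
    t.Start();
    workers.Add(t);
}
workers.ForEach(t => t.Join());                               // wait for all workers to drain the queue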
You could enumerate with a buffer instead of calling ReadToEnd, and if it is taking too long, then you could log and abandon - something like:
static void Main(string[] args)
{
Uri largeUri = new Uri("http://www.rfkbau.de/index.php?option=com_easybook&Itemid=22&startpage=7096");
DateTime start = DateTime.Now;
int timeoutSeconds = 10;
foreach (var s in ReadLargePage(largeUri))
{
if ((DateTime.Now - start).TotalSeconds > timeoutSeconds)
{
Console.WriteLine("Stopping - this is taking too long.");
break;
}
}
}
static IEnumerable<string> ReadLargePage(Uri uri)
{
int bufferSize = 8192;
int readCount;
Char[] readBuffer = new Char[bufferSize];
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (StreamReader stream = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
{
readCount = stream.Read(readBuffer, 0, bufferSize);
while (readCount > 0)
{
yield return new string(readBuffer, 0, readCount);
readCount = stream.Read(readBuffer, 0, bufferSize);
}
}
}
Lirik has a really good summary.
I would add that if I were implementing this, I would make a separate process that reads the pages, so it would be a pipeline. The first stage downloads the URL and writes it to a disk location, then queues that file to the next stage. The next stage reads from disk and does the parsing & DB updates. That way you get maximum throughput on both the download and the parsing. You can also tune your thread pools so that you have more workers parsing, etc. This architecture also lends itself very well to distributed processing, where you can have one machine downloading and another host parsing, etc.
Another thing to note is that if you are hitting the same server from multiple threads (even if you are using async), you will run up against the max outgoing connection limit. You can throttle yourself to stay below it, or increase the connection limit on the ServicePointManager class.
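For example, the limit can be raised once at startup (100 is just an arbitrary value; the default for client apps is 2 connections per host):
// Allow more simultaneous outgoing connections per host.
System.Net.ServicePointManager.DefaultConnectionLimit = 100;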
I am playing with the RserveCLI project, a .NET client that communicates with the statistical environment R. The basic idea is sending data/commands between this .NET client and an R session over TCP.
One bug that others and I found is that big chunks of data, say over 10k bytes, cannot be transferred successfully. I found the bug in the following code snippet:
// send the command to R, then R will do some computation and get the data ready to send back
int toConsume = this.SubmitCommand(cmd, data);
var res = new List<object>();
while (toConsume > 0)
{
var dhbuf = new byte[4];
if (this.socket.Receive(dhbuf) != 4)
{
throw new WebException("Didn't receive a header.");
}
byte typ = dhbuf[0];
// ReSharper disable RedundantCast
int dlength = dhbuf[1] + (((int)dhbuf[2]) << 8) + (((int)dhbuf[3]) << 16);
// ReSharper restore RedundantCast
var dvbuf = new byte[dlength];
// BUG: I added this sleep line; without it, the bug occurs
System.Threading.Thread.Sleep(500);
// this line cannot receive the whole data at once
var received = this.socket.Receive(dvbuf);
// so the exception throws
if (received != dvbuf.Length)
{
var tempR = this.socket.Receive(dvbuf);
throw new WebException("Expected " + dvbuf.Length + " bytes of data, but received " + received + ".");
}
The reason is that the .NET code runs too fast and the R side cannot send the data that fast. So the Receive call after my inserted Thread.Sleep(500) does not get all the data. If I wait some time there, then it can get all the data, but I don't know how long to wait.
I have a basic idea of how to deal with the problem, for example continuously calling this.socket.Receive() to get data, but if there is no data, Receive will block.
I have little experience in socket programming, so I am asking for the best practice for this kind of problem. Thanks!
According to the docs:
If you are using a connection-oriented Socket, the Receive method will read as much data as is available, up to the size of the buffer.
So you are never guaranteed to get all the data you asked for in one Receive call. You need to check how many bytes were actually returned by Receive, then issue another Receive call for the remaining bytes. Continue that loop until you get all the bytes you were looking for.
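A small helper along those lines (a sketch, assuming System.Net and System.Net.Sockets; it throws if the connection closes before everything arrives):
// Keep calling Receive until exactly 'count' bytes have been read into 'buffer'.
static void ReceiveExactly(Socket socket, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int n = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (n == 0)
            throw new WebException("Connection closed before all data was received.");
        offset += n;
    }
}
In the snippet above, the Receive(dvbuf) call would become ReceiveExactly(this.socket, dvbuf, dlength), and the Thread.Sleep(500) can go away.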
By definition, TCP is a streaming protocol, whereas UDP is message based. If the data you are trying to receive does not contain a byte count for the entire message, or some sort of end-of-message indicator, you will just have to loop on the socket.receive until some arbitrary timeout has expired. At that point, check the accumulated received data for completeness.
I am working on building a simple proxy which will log certain requests passed through it. The proxy does not need to interfere with the traffic passing through it (at this point in the project), so I am trying to do as little parsing of the raw request/response as possible during the process (the request and response are pushed off to a queue to be logged outside of the proxy).
My sample works fine, except that I cannot reliably tell when the "response" is complete, so I have connections left open longer than needed. The relevant code is below:
var request = getRequest(url);
byte[] buffer;
int bytesRead = 1;
var dataSent = false;
var timeoutTicks = DateTime.Now.AddMinutes(1).Ticks;
Console.WriteLine(" Sending data to address: {0}", url);
Console.WriteLine(" Waiting for response from host...");
using (var outboundStream = request.GetStream()) {
while (request.Connected && (DateTime.Now.Ticks < timeoutTicks)) {
while (outboundStream.DataAvailable) {
dataSent = true;
buffer = new byte[OUTPUT_BUFFER_SIZE];
bytesRead = outboundStream.Read(buffer, 0, OUTPUT_BUFFER_SIZE);
if (bytesRead > 0) { _clientSocket.Send(buffer, bytesRead, SocketFlags.None); }
Console.WriteLine(" pushed {0} bytes to requesting host...", _backBuffer.Length);
}
if (request.Connected) { Thread.Sleep(0); }
}
}
Console.WriteLine(" Finished with response from host...");
Console.WriteLine(" Disconnecting socket");
_clientSocket.Shutdown(SocketShutdown.Both);
My question is whether there is an easy way to tell that the response is complete without parsing the headers. Given that this response could be anything (encoded, encrypted, gzipped, etc.), I don't want to have to decode the actual response to get the length and determine whether I can disconnect my socket.
As David pointed out, connections should remain open for a period of time. You should not close connections unless the client side does that (or if the keep alive interval expires).
Changing to HTTP/1.0 will not work since you are a server, and it's the client that specifies HTTP/1.1 in the request. Sure, you can send an error message with HTTP/1.0 as the version and hope that the client switches to 1.0, but that seems inefficient.
HTTP messages look like this:
REQUEST LINE
HEADERS
(empty line)
BODY
The only way to know when a response is done is to search for the Content-Length header. Simply search for "Content-Length:" in the request buffer and extract everything up to the line feed (but trim the found value before converting it to an int).
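Something like this rough sketch, assuming headerText already holds everything up to the blank line:
// Pull the Content-Length value out of the raw header text.
int contentLength = 0;
foreach (var line in headerText.Split(new[] { "\r\n" }, StringSplitOptions.None))
{
    if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
    {
        contentLength = int.Parse(line.Substring("Content-Length:".Length).Trim());
        break;
    }
}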
The other alternative is to use the parser in my webserver to get all headers. It should be quite easy to use just the parser and nothing more from the library.
Update: There is a better parser here: HttpParser.cs
If you make an HTTP/1.0 request instead of 1.1, the server should close the connection as soon as it's through, since it doesn't need to keep the connection open for another request.
Other than that, you really need to parse the Content-Length header in the response to get the best value.
Using blocking IO and multiple threads might be your answer. Specifically
using(var response = request.GetResponse())
using(var stream = response.GetResponseStream())
using(var reader = new StreamReader(stream)
data = reader.ReadToEnd()
This is for textual data, however binary handling is similar.
I'm using the Socket class for my web client. I can't use HttpWebRequest since it doesn't support SOCKS proxies. So I have to parse headers and handle chunked encoding myself. The most difficult thing for me is determining the length of the content, so I have to read it byte by byte. First I have to use ReadByte() to find the last header (the "\r\n\r\n" combination), then check whether the body has a transfer encoding or not. If it does, I have to read the chunk's size, etc.:
public void ParseHeaders(Stream stream)
{
while (true)
{
var lineBuffer = new List<byte>();
while (true)
{
int b = stream.ReadByte();
if (b == -1) return;
if (b == 10) break;
if (b != 13) lineBuffer.Add((byte)b);
}
string line = Encoding.ASCII.GetString(lineBuffer.ToArray());
if (line.Length == 0) break;
int pos = line.IndexOf(": ");
if (pos == -1) throw new VkException("Incorrect header format");
string key = line.Substring(0, pos);
string value = line.Substring(pos + 2);
Headers[key] = value;
}
}
But this approach has very poor performance. Can you suggest a better solution? Maybe some open-source examples or libraries that handle HTTP requests through sockets (not very big or complicated though, I'm a noob).
The best would be a link to an example that reads the message body and correctly handles the cases where the content has chunked encoding, is gzip- or deflate-encoded, or the Content-Length header is omitted (the message ends when the connection is closed). Something like the source code of the HttpWebRequest class.
Update:
My new function looks like this:
int bytesRead = 0;
byte[] buffer = new byte[0x8000];
do
{
try
{
bytesRead = this.socket.Receive(buffer);
if (bytesRead <= 0) break;
else
{
this.m_responseData.Write(buffer, 0, bytesRead);
if (this.m_inHeaders == null) this.GetHeaders();
}
}
catch (Exception exception)
{
throw new Exception("Read response failed", exception);
}
}
while ((this.m_inHeaders == null) || !this.isResponseBodyComplete());
Where GetHeaders() and isResponseBodyComplete() use m_responseData (MemoryStream) with already received data.
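For reference, a rough idea of what isResponseBodyComplete() might check in the Content-Length case (m_contentLength and m_bodyOffset are hypothetical fields that GetHeaders() would fill in; chunked responses need their own handling):
private bool isResponseBodyComplete()
{
    if (this.m_inHeaders == null) return false;        // headers not parsed yet
    if (this.m_contentLength < 0) return false;        // no Content-Length: chunked or connection-close
    return this.m_responseData.Length >= this.m_bodyOffset + this.m_contentLength;
}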
I suggest that you don't implement this yourself - the HTTP 1.1 protocol is sufficiently complex to make this a project of several man-months.
The question is, is there an HTTP request protocol parser for .NET? This question has been asked on SO, and in the answers you'll see several suggestions, including source code for handling HTTP streams.
Converting Raw HTTP Request into HTTPWebRequest Object
EDIT: The Rotor code is reasonably complex, and difficult to read/navigate as web pages. But still, the implementation effort to add SOCKS support is much lower than implementing the entire HTTP protocol yourself. You will have something working within a few days at most that you can depend upon, based on a tried and tested implementation.
The request and response are read from/written to a NetworkStream, m_Transport, in the Connection class. This is used in these methods:
internal int Read(byte[] buffer, int offset, int size)
//and
private static void ReadCallback(IAsyncResult asyncResult)
both in http://www.123aspx.com/Rotor/RotorSrc.aspx?rot=42903
The socket is created in
private void StartConnectionCallback(object state, bool wasSignalled)
So you could modify this method to create a Socket to your socks server, and do the necessary handshake to obtain the external connection. The rest of the code can remain the same.
I gathered this info in about 30 minutes looking at the pages on the web. It should go much faster if you load these files into an IDE. It may seem like a burden to have to read through this code - after all, reading code is far harder than writing it - but you are making just small changes to an already established, working system.
To be sure the changes work in all cases, it would be wise to also test when the connection is broken, to ensure that the client reconnects using the same method, and so re-establishes the SOCKS connection and sends the SOCKS request.
If the problem is a bottleneck in terms of ReadByte being too slow, I suggest you wrap your input stream in a BufferedStream. If the performance issue you claim to have is caused by the cost of many small reads, that will solve the problem for you.
Also, you don't need this:
string line = Encoding.ASCII.GetString(lineBuffer.ToArray());
HTTP by design requires that the header is made up only of ASCII characters. You don't really want to, or need to, turn it into actual .NET strings (which are Unicode).
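A one-line version of the buffering suggestion (BufferedStream is the standard wrapper; networkStream is assumed to be the stream you currently pass to ParseHeaders):
// ReadByte() now hits an in-memory buffer instead of going to the socket every time.
Stream buffered = new BufferedStream(networkStream, 0x8000);
ParseHeaders(buffered);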
If you want to find the end of the HTTP headers, you can do this for good performance:
int k = 0;
while (k != 0x0d0a0d0a)
{
    int ch = stream.ReadByte();
    if (ch == -1) break;   // stream ended before the headers terminated
    k = (k << 8) | ch;
}
When the string \r\n\r\n is encountered, k will equal 0x0d0a0d0a.
In most (though not all) HTTP responses, there is a header called Content-Length that tells you how many bytes there are in the body. Then it is simply a matter of allocating the appropriate number of bytes and reading those bytes all at once.
While I would tend to agree with mdma about trying as hard as possible to avoid implementing your own HTTP stack, one trick you might consider is reading from the stream in moderate-sized chunks. If you do a read and give it a buffer that's larger than what's available, it should return the number of bytes it did read. That should reduce the number of system calls and speed up your performance significantly. You'll still have to scan the buffers much like you do now, though.
Taking a look at another client's code is helpful (if not confusing):
http://src.chromium.org/viewvc/chrome/trunk/src/net/http/
I'm currently doing something like this too. I find the best way to increase the efficiency of the client is to use the asynchronous socket functions provided. They're quite low-level and get rid of busy waiting and dealing with threads yourself. All of these have Begin and End in their method names. But first, I would try it using blocking calls, just so you get the semantics of HTTP out of the way. Then you can work on efficiency. Remember: premature optimization is evil, so get it working, then optimize all of the stuff!
Also: some of your inefficiency might be tied up in your use of ToArray(). It's known to be a bit computationally expensive. A better solution might be to store your intermediate results in a byte[] buffer and append them to a StringBuilder with the correct encoding.
For gzipped or deflated data, read in all of the data (keep in mind that you might not get all of it the first time you ask; keep track of how much data you have read, and keep appending to the same buffer). Then you can decode the data using GZipStream(..., CompressionMode.Decompress).
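For example, a small sketch of that decompression step, assuming the complete gzip-compressed body has already been collected into bodyBytes:
// Decompress a fully buffered gzip body into a string (System.IO.Compression).
using (var compressed = new MemoryStream(bodyBytes))
using (var gzip = new GZipStream(compressed, CompressionMode.Decompress))
using (var reader = new StreamReader(gzip, Encoding.UTF8))
{
    string html = reader.ReadToEnd();
}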
I would say that doing this is not as difficult as some might imply, you just have to be a bit adventurous!
All the answers here about extending Socket and/or TcpClient seem to miss something really obvious - that HttpWebRequest is also a class and can therefore be extended.
You don't need to write your own HTTP/socket class. You simply need to extend HttpWebRequest with a custom connection method. After connecting, all data is standard HTTP and can be handled as normal by the base class.
public class SocksHttpWebRequest : HttpWebRequest
{
    public static Create( string url, string proxy_url )
    {
        // ... setup socks connection ...

        // call base HttpWebRequest class Create() with proxy url
        base.Create(proxy_url);
    }
}
The SOCKS handshake is not particularly complex so if you have a basic understanding of programming sockets it shouldn't take very long to implement the connection. After that HttpWebRequest can do the HTTP heavy lifting.
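For what it's worth, here is a sketch of the SOCKS4 CONNECT handshake (SOCKS5 adds an authentication negotiation step); proxySocket, destAddress and destPort are assumed to be set up already:
// SOCKS4 CONNECT: version, command, destination port (big-endian), destination IPv4, user id, NUL.
var handshake = new List<byte> { 0x04, 0x01 };
handshake.Add((byte)(destPort >> 8));
handshake.Add((byte)(destPort & 0xFF));
handshake.AddRange(destAddress.GetAddressBytes());            // IPAddress of the target host
handshake.Add(0x00);                                           // empty user id, NUL-terminated
proxySocket.Send(handshake.ToArray());

byte[] reply = new byte[8];
proxySocket.Receive(reply);
if (reply[1] != 0x5A)                                          // 0x5A = request granted
    throw new IOException("SOCKS4 connect rejected: " + reply[1]);
After this exchange succeeds, the socket behaves like a plain TCP connection to the target host and normal HTTP traffic can flow over it.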
Why don't you read until two newlines and then just grab the headers from the string? Performance might be worse, but it should still be reasonable:
Dim Headers As String = GetHeadersFromRawRequest(ResponseBinary)
If Headers.IndexOf("Content-Encoding: gzip") > 0 Then
Dim GzSream As New GZipStream(New MemoryStream(ResponseBinary, Headers.Length + (vbNewLine & vbNewLine).Length, ReadByteSize - Headers.Length), CompressionMode.Decompress)
ClearTextHtml = New StreamReader(GzSream).ReadToEnd()
End If
Private Function GetHeadersFromRawRequest(ByVal request() As Byte) As String
Dim Req As String = Text.Encoding.ASCII.GetString(request)
Dim ContentPos As Integer = Req.IndexOf(vbNewLine & vbNewLine)
If ContentPos = -1 Then Return String.Empty
Return Req.Substring(0, ContentPos)
End Function
You may want to look at the TcpClient class in System.Net, it's a wrapper for a Socket that simplifies the basic operations.
From there you're going to have to read up on the HTTP protocol. Also be prepared to do some zip operations; HTTP 1.1 supports gzip of its content and partial blocks. You're going to have to learn quite a bit to parse them out by hand.
Basic HTTP 1.0 is simple; the protocol is well documented online, and our friendly neighborhood Google can help you with that one.
I would create a SOCKS proxy that can tunnel HTTP and then have it accept the requests from HttpWebRequest and forward them. I think that would be far easier than recreating everything that HttpWebRequest does. You could start with Privoxy, or just roll your own. The protocol is simple and documented here:
http://en.wikipedia.org/wiki/SOCKS
And on the RFC's that they link to.
You mentioned that you have to have many different proxies -- you could set up a local port for each one.
I am batch uploading products to a database.
I am downloading the images from URLs on the site to be used for the products.
The code I've written works fine for the first 25 iterations (always that number, for some reason), but then throws a System.Net.WebException: "The operation has timed out".
if (!File.Exists(localFilename))
{
using (WebClient Client = new WebClient())
{
Client.DownloadFile(remoteFilename, localFilename);
}
}
I checked the remote url it was requesting and it is a valid image url that returns an image.
Also, when I step through it with the debugger, I don't get the timeout error.
HELP! ;)
If I were in your shoes, here are a few possibilities I'd investigate:
If you're running this code from multiple threads, you may be bumping up against the System.Net.ServicePointManager.DefaultConnectionLimit property. Try increasing it to 50-100 when you start up your app. Note that I don't think this is your problem, but trying this is easier than the other stuff below. :-)
Another possibility is that you're swamping the server. This is usually hard to do with a single-threaded client, but is possible since multiple other clients may be hitting the server as well. But because the problem always happens at #25, this seems unlikely, since you'd expect to see more variation.
You may be running into a problem with keep-alive HTTP connections backing up between your client and the server. This also seems unlikely.
The hard cutoff of 25 makes me think that this may be a proxy or firewall limit, either on your end or the server's, where more than 25 connections made from one client IP to one server (or proxy) get throttled.
My money is on the last one, since the problem always breaks at a nice round number of requests, and stepping through in the debugger (aka slower!) doesn't trigger it.
To test all this, I'd start with the easy thing: stick in a delay (Thread.Sleep) before each HTTP call, and see if the problem goes away. If it does, reduce the delay until the problem comes back. If it doesn't, increase the delay up to a large number (e.g. 10 seconds) until the problem goes away. If it doesn't go away with a 10 second delay, that's truly a mystery and I'd need more info to diagnose.
If it does go away with a delay, then you need to figure out why-- and whether the limit is permanent (e.g. server's firewall which you can't change) or something you can change. To get more info, you'll want to time the requests (e.g. check DateTime.Now before and after each call) to see if you see a pattern. If the timings are all consistent and suddenly get huge, that suggests a network/firewall/proxy throttling. If the timings gradually increase, that suggests a server you're gradually overloading and lengthening its request queue.
In addition to timing the requests, I'd set the timeout of your webclient calls to be longer, so you can figure out if the timeout is infinite or just a bit longer than the default. To do this, you'll need an alternative to the WebClient class, since it doesn't support a timeout. This thread on MSDN Forums has a reasonable alternative code sample.
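The usual approach in those samples is a small WebClient subclass that overrides GetWebRequest to set the timeout; a sketch:
// WebClient with a configurable timeout (milliseconds).
public class TimeoutWebClient : WebClient
{
    public int TimeoutMs { get; set; }

    public TimeoutWebClient() { TimeoutMs = 60000; }

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        if (request != null) request.Timeout = TimeoutMs;
        return request;
    }
}
// usage: new TimeoutWebClient { TimeoutMs = 120000 }.DownloadFile(remoteFilename, localFilename);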
An alternative to adding timing in your code is to use Fiddler:
Download Fiddler and start it up.
Set your WebClient code's Proxy property to point to the Fiddler proxy (localhost:8888).
Run your app and look at Fiddler.
It seems that WebClient does not close the Response object it uses when it's done, which in your case causes many responses to be open at the same time; with a limit of 25 connections on the remote server, you get the 'Timeout' exception. When you debug, the responses opened earlier get closed due to their inner timeout, etc.
(I inspected WebClient with Reflector; I can't find an instruction that closes the response.)
I propose that you use HttpWebRequest & HttpWebResponse so that you can clean up the objects after each download:
HttpWebRequest request;
HttpWebResponse response = null;
try
{
FileStream fs;
Stream s;
byte[] read;
int count;
read = new byte[256];
request = (HttpWebRequest)WebRequest.Create(remoteFilename);
request.Timeout = 30000;
request.AllowWriteStreamBuffering = false;
response = (HttpWebResponse)request.GetResponse();
s = response.GetResponseStream();
fs = new FileStream(localFilename, FileMode.Create);
while((count = s.Read(read, 0, read.Length)) > 0)
{
    fs.Write(read, 0, count);
}
fs.Close();
s.Close();
}
catch (System.Net.WebException)
{
//....
}finally
{
//Close Response
if (response != null)
response.Close();
}
Here's a slightly simplified version of manji's answer:
private static void DownloadFile(Uri remoteUri, string localPath)
{
var request = (HttpWebRequest)WebRequest.Create(remoteUri);
request.Timeout = 30000;
request.AllowWriteStreamBuffering = false;
using (var response = (HttpWebResponse)request.GetResponse())
using (var s = response.GetResponseStream())
using (var fs = new FileStream(localPath, FileMode.Create))
{
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = s.Read(buffer, 0, buffer.Length)) > 0)
{
    fs.Write(buffer, 0, bytesRead);
}
}
}
I had the same problem and I solved it by adding these lines to the configuration file, app.config:
<system.net>
<connectionManagement>
<add address="*" maxconnection="100" />
</connectionManagement>
</system.net>