Testing different ways to download the source of a webpage, I got the following results (average time in ms for google.com and 9gag.com):
Plain HttpWebRequest: 169, 360
Gzip HttpWebRequest: 143, 260
WebClient GetStream: 132, 295
WebClient DownloadString: 143, 389
So for my 9gag client I decided to use the gzip HttpWebRequest. The problem is that after implementing it in my actual program, each request takes more than twice as long.
The problem also occurs when I just add a Thread.Sleep between two requests.
EDIT:
I just improved the code a bit, but the problem is still the same: when running in a loop, the requests take longer when I add a delay between two requests.
for(int i = 0; i < 100; i++)
{
    getWebsite("http://9gag.com/");
}
Takes about 250ms per request.
for(int i = 0; i < 100; i++)
{
    getWebsite("http://9gag.com/");
    Thread.Sleep(1000);
}
Takes about 610ms per request.
private string getWebsite(string Url)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    HttpWebRequest http = (HttpWebRequest)WebRequest.Create(Url);
    http.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
    string html = string.Empty;
    using (HttpWebResponse webResponse = (HttpWebResponse)http.GetResponse())
    using (Stream responseStream = webResponse.GetResponseStream())
    using (StreamReader reader = new StreamReader(responseStream))
    {
        html = reader.ReadToEnd();
    }
    Debug.WriteLine(stopwatch.ElapsedMilliseconds);
    return html;
}
Any ideas to fix this problem?
Maybe give this a try, although it might only help in your single-request case and could actually make things worse in a multithreaded version.
ServicePointManager.UseNagleAlgorithm = false;
Here's a quote from the MSDN documentation for the HttpWebRequest class:
Another option that can have an impact on performance is the use of the UseNagleAlgorithm property. When this property is set to true, TCP/IP will try to use the TCP Nagle algorithm for HTTP connections. The Nagle algorithm aggregates data when sending TCP packets. It accumulates sequences of small messages into larger TCP packets before the data is sent over the network. Using the Nagle algorithm can optimize the use of network resources, although in some situations performance can also be degraded. Generally for constant high-volume throughput, a performance improvement is realized using the Nagle algorithm. But for smaller throughput applications, degradation in performance may be seen.
An application doesn't normally need to change the default value for the UseNagleAlgorithm property, which is set to true. However, if an application is using low-latency connections, it may help to set this property to false.
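If disabling Nagle process-wide feels too broad, the same setting also exists per endpoint. A small sketch (untested, assuming the 9gag URL from the question), applied before the first request to that host is made:
// Hypothetical per-endpoint variant: turn Nagle off only for this host
// instead of for every ServicePoint in the process.
ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("http://9gag.com/"));
sp.UseNagleAlgorithm = false;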
I think you might be leaking resources, as you aren't disposing of all of your IDisposable objects with each method call.
Give this version a try and see if it gives you a more consistent execution time.
public string getWebsite( string Url )
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    HttpWebRequest http = (HttpWebRequest) WebRequest.Create( Url );
    http.Headers.Add( HttpRequestHeader.AcceptEncoding, "gzip,deflate" );
    string html = string.Empty;
    using ( HttpWebResponse webResponse = (HttpWebResponse) http.GetResponse() )
    {
        using ( Stream responseStream = webResponse.GetResponseStream() )
        {
            Stream decompressedStream = null;
            if ( webResponse.ContentEncoding.ToLower().Contains( "gzip" ) )
                decompressedStream = new GZipStream( responseStream, CompressionMode.Decompress );
            else if ( webResponse.ContentEncoding.ToLower().Contains( "deflate" ) )
                decompressedStream = new DeflateStream( responseStream, CompressionMode.Decompress );
            if ( decompressedStream != null )
            {
                using ( StreamReader reader = new StreamReader( decompressedStream, Encoding.Default ) )
                {
                    html = reader.ReadToEnd();
                }
                decompressedStream.Dispose();
            }
        }
    }
    Debug.WriteLine( stopwatch.ElapsedMilliseconds );
    return html;
}
Related
I have a piece of code that, I have noticed, begins to increase the latency on my computer after prolonged use. The requests slowly get longer and longer, and according to SpeedOf.Me my latency visibly increases. The only thing that seems to cure it is resetting my modem. Am I not correctly closing connections or releasing resources? Why is this happening?
var request = HttpWebRequest.Create(uri);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (GZipStream zip = new GZipStream(response.GetResponseStream(), CompressionMode.Decompress, true))
using (StreamReader unzip = new StreamReader(zip))
{
    string str = unzip.ReadToEnd();
}
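For comparison, here is a minimal sketch (not verified against your setup) that relies on AutomaticDecompression instead of a manual GZipStream and disposes the response, the response stream, and the reader explicitly, so nothing is left waiting for the finalizer:
// Sketch only: let HttpWebRequest handle gzip/deflate and dispose every layer.
var request = (HttpWebRequest)WebRequest.Create(uri);
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(responseStream))
{
    string str = reader.ReadToEnd();
}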
I wrote a .NET C# Windows service that has been running on our server for a very long time (several months).
Yesterday I checked and found out that it uses 600 MB of memory.
I restarted the service and now it uses 60 MB of RAM.
I've started to investigate why it was using so much memory.
Will the following function cause a memory leak?
I think it's missing a .Close() for the StreamReader.
As a test, I've run the following function in a loop 1000 times and I didn't see the memory going up.
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    StreamReader reader = new StreamReader(response.GetResponseStream());
    string tmp = reader.ReadToEnd();
    response.Close();
    return tmp;
}
Your code is closing the response, but not the reader.
var tmp = string.Empty;
using (var reader = new StreamReader(response.GetResponseStream()))
{
    tmp = reader.ReadToEnd();
}
// do whatever you want with tmp here...
All objects that implement IDisposable, such as WebResponse and StreamReader, should be disposed.
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    using (var response = request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
The code does not produce a memory leak.
The code is not ideal, as everyone points out (it will close resources later than you expect), but they will be released when the GC gets around to running and finalizing unused objects.
Are you sure you are seeing a memory leak, or are you just assuming you have one based on some semi-random value? The CLR may not free memory used by the managed heap even if no objects are allocated, and the GC may not need to run if there isn't enough memory pressure (especially on x64).
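One way to tell the two cases apart is to force full collections before and after the loop and compare the managed heap size. A rough sketch (arbitrary iteration count; GetTemplate stands in for the method above, and its parameter value is irrelevant here):
// Rough sketch: force full collections so GC timing doesn't mask or fake a leak.
long before = GC.GetTotalMemory(true);
for (int i = 0; i < 1000; i++)
{
    GetTemplate(string.Empty);   // the method under test; the argument is unused by it
}
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
long after = GC.GetTotalMemory(true);
Console.WriteLine("Managed heap delta after full collection: {0} bytes", after - before);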
I would suggest a lot more than 1000 iterations if you want to see whether the memory increases. Each iteration would only take up a small bit of memory, if this is indeed your leak.
I'm not sure if that is the source of your memory leak, but it's good practice to .Close() your StreamReaders when you're done with them.
With StreamReader it's good practice to use 'using', so that Dispose() is called automatically when the object goes out of scope.
using (var reader = new StreamReader(FilePath))
{
    string tmp = reader.ReadToEnd();
}
As for your issue, 1000 times is not very many iterations. Try leaving the app running for a couple of hours to clock up a few hundred thousand; this will give you a better indication.
It could, potentially, depending on how frequently you use it, because you don't make an explicit call to Dispose() on the reader. To be sure you've done whatever you can in these lines, write them like this:
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    string tmp;
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        tmp = reader.ReadToEnd();
        response.Close();
    }
    // Dispose() of the reader is called here automatically,
    // whether an exception was thrown or not.
    return tmp;
}
I have a MonoTouch-based iOS universal app. It uses REST services to make calls to get data. I'm using the HttpWebRequest class to build and make my calls. Everything works great, except that it seems to be holding onto memory. I've got usings all over the code to limit the scope of things, and I've avoided anonymous delegates as well, since I had heard they can be a problem.
I have a helper class that builds up my call to my REST service. As I make calls, it seems to just hold onto memory. I'm curious whether anyone has run into similar issues with HttpWebRequest and what to do about it. I'm currently looking to see if I can make the call using an nsMutableRequest and avoid HttpWebRequest entirely, but I'm struggling to get that to work with NTLM authentication. Any advice is appreciated.
protected T IntegrationCall<T,I>(string methodName, I input) {
    HttpWebRequest invokeRequest = BuildWebRequest<I>(GetMethodURL(methodName), "POST", input, true);
    WebResponse response = invokeRequest.GetResponse();
    T result = DeserializeResponseObject<T>((HttpWebResponse)response);
    invokeRequest = null;
    response = null;
    return result;
}
protected HttpWebRequest BuildWebRequest<T>(string url, string method, T requestObject, bool IncludeCredentials)
{
    ServicePointManager.ServerCertificateValidationCallback = Validator;
    var invokeRequest = WebRequest.Create(url) as HttpWebRequest;
    if (invokeRequest == null)
        return null;
    if (IncludeCredentials)
    {
        invokeRequest.Credentials = CommonData.IntegrationCredentials;
    }
    if (!string.IsNullOrEmpty(method))
        invokeRequest.Method = method;
    else
        invokeRequest.Method = "POST";
    invokeRequest.ContentType = "text/xml";
    invokeRequest.Timeout = 40000;
    using (Stream requestObjectStream = new MemoryStream())
    {
        DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
        serializedObject.WriteObject(requestObjectStream, requestObject);
        requestObjectStream.Position = 0;
        using (StreamReader reader = new StreamReader(requestObjectStream))
        {
            string strTempRequestObject = reader.ReadToEnd();
            //byte[] requestBodyBytes = Encoding.UTF8.GetBytes(strTempRequestObject);
            Encoding enc = new UTF8Encoding(false);
            byte[] requestBodyBytes = enc.GetBytes(strTempRequestObject);
            invokeRequest.ContentLength = requestBodyBytes.Length;
            using (Stream postStream = invokeRequest.GetRequestStream())
            {
                postStream.Write(requestBodyBytes, 0, requestBodyBytes.Length);
            }
        }
    }
    return invokeRequest;
}
Using using is the right thing to do, but your code seems to duplicate the same content multiple times (which it should not do).
requestObjectStream is turned into a string, which is then turned into a byte[] before being written to another stream. And that's without considering what the extra calls (e.g. ReadToEnd and UTF8Encoding.GetBytes) might allocate themselves (more strings, byte[]s, ...).
So if what you serialize is large, you'll consume a lot of extra memory (for nothing). It's even a bit worse for string and byte[], since you can't dispose them manually (the GC will decide when, which makes measurement harder).
I would try (but did not ;-) something like:
...
using (Stream requestObjectStream = new MemoryStream ()) {
    DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
    serializedObject.WriteObject(requestObjectStream, requestObject);
    requestObjectStream.Position = 0;
    invokeRequest.ContentLength = requestObjectStream.Length;
    using (Stream postStream = invokeRequest.GetRequestStream())
        requestObjectStream.CopyTo (postStream);
}
...
That would let the MemoryStream copy itself to the request stream. An alternative is to call ToArray on the MemoryStream (but that's another copy of the serialized object that the GC will have to track and free).
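For completeness, a sketch of that ToArray alternative (same caveat about the extra byte[] copy; it reuses invokeRequest and requestObject from the code above):
using (MemoryStream requestObjectStream = new MemoryStream())
{
    DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
    serializedObject.WriteObject(requestObjectStream, requestObject);
    // ToArray gives the GC one more byte[] to track, but avoids the string round-trip.
    byte[] body = requestObjectStream.ToArray();
    invokeRequest.ContentLength = body.Length;
    using (Stream postStream = invokeRequest.GetRequestStream())
        postStream.Write(body, 0, body.Length);
}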
I have some code for downloading the content of a webpage that I've been using for a while. It works fine and has never caused an issue, and still doesn't... However, there is a page that is rather large (2 MB, no images) with 4 tables of 4, 20, 100 and 600 rows respectively, each about 20 columns wide.
When I try to get all the data, the request completes without any apparent errors or exceptions but only returns up to about row 60 of the 4th table, sometimes more, sometimes less. The browser finishes loading in about 20-30 seconds, with what seem like constant flushes to the page until it is complete.
I've tried a number of solutions from SO and from searching, without any different results. Below is the current code, but I've also tried a proxy, async, no timeouts, KeepAlive = false...
I can't use WebClient (as another far-fetched attempt) because I need to log in using the CookieContainer.
HttpWebRequest pageImport = (HttpWebRequest)WebRequest.Create(importUri);
pageImport.ReadWriteTimeout = Int32.MaxValue;
pageImport.Timeout = Int32.MaxValue;
pageImport.UserAgent = "User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3";
pageImport.Accept = "Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
pageImport.KeepAlive = true;
pageImport.Timeout = Int32.MaxValue;
pageImport.ReadWriteTimeout = Int32.MaxValue;
pageImport.MaximumResponseHeadersLength = Int32.MaxValue;
if (null != LoginCookieContainer)
{
    pageImport.CookieContainer = LoginCookieContainer;
}
Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
using (WebResponse response = pageImport.GetResponse())
using (Stream stream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(stream, encode))
{
    stream.Flush();
    HtmlRetrieved = reader.ReadToEnd();
}
Try to read block-wise instead of using reader.ReadToEnd().
Just to give you an idea:
// Pipe the response stream (ReceiveStream) to a higher-level stream reader with the required encoding format.
StreamReader readStream = new StreamReader(ReceiveStream, encode);
Console.WriteLine("\nResponse stream received");
Char[] read = new Char[256];
// Read 256 characters at a time.
int count = readStream.Read(read, 0, 256);
Console.WriteLine("HTML...\r\n");
while (count > 0)
{
    // Dump the 256 characters into a string and display the string on the console.
    String str = new String(read, 0, count);
    Console.Write(str);
    count = readStream.Read(read, 0, 256);
}
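Applied to the code in the question, the same idea might look roughly like this (a sketch; it reuses stream and encode from the question and accumulates the chunks in a StringBuilder instead of writing them to the console):
// Sketch: chunked read of the response, building the HTML string incrementally.
StringBuilder sb = new StringBuilder();
char[] chunk = new char[4096];
using (StreamReader reader = new StreamReader(stream, encode))
{
    int count;
    while ((count = reader.Read(chunk, 0, chunk.Length)) > 0)
    {
        sb.Append(chunk, 0, count);
    }
}
HtmlRetrieved = sb.ToString();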
I suspect this is handled as a configuration setting on the server side. Incidentally, I think you may be setting your properties incorrectly. Remove the "User-Agent" and "Accept" prefixes from the literals, like so:
pageImport.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3";
pageImport.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
While I'm still going to try the suggestions provided (and will change my answer if one works), it seems that in this case the problem IS the proxy. I got in front of the proxy and the code works as expected and much faster.
I'll have to look at some proxy optimizations since this code must run behind the proxy.
I have a monitoring system and I want to save a snapshot from a camera when an alarm triggers.
I have tried many methods to do that, and it all works fine: stream the snapshot from the camera, then save it as a JPG on the PC (JPG format, 1280*1024, 140 KB). That's fine.
But my problem is the application's performance.
The app needs about 20-30 seconds to read the stream, which is not acceptable because the method will be called every 2 seconds. I need to know what is wrong with the code and how I can make it much faster.
Many thanks in advance
Code:
string sourceURL = "http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT";
byte[] buffer = new byte[200000];
int read, total = 0;
WebRequest req = (WebRequest)WebRequest.Create(sourceURL);
req.Credentials = new NetworkCredential("admin", "123456");
WebResponse resp = req.GetResponse();
Stream stream = resp.GetResponseStream();
while ((read = stream.Read(buffer, total, 1000)) != 0)
{
    total += read;
}
Bitmap bmp = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0,total));
string path = JPGName.Text+".jpg";
bmp.Save(path);
I very much doubt that this code is the cause of the problem, at least for the first method call (but read further below).
Technically, you could produce the Bitmap without saving to a memory buffer first, or, if you don't need to display the image as well, you could save the raw data without ever constructing a Bitmap, but that's not going to give you a multiple-second performance improvement. Have you checked how long it takes to download the image from that URL using a browser, wget, curl or some other tool? I suspect something is going on at the encoding source.
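If the camera already returns a JPEG, that "save the raw data" variant could look something like this (a sketch only; it reuses req and JPGName.Text from the code above and needs .NET 4.0 or later for Stream.CopyTo):
// Sketch: write the raw response bytes straight to disk, no Bitmap round-trip.
string path = JPGName.Text + ".jpg";
using (WebResponse resp = req.GetResponse())
using (Stream stream = resp.GetResponseStream())
using (FileStream file = File.Create(path))
{
    stream.CopyTo(file);
}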
Something you should do in any case is clean up your resources and close the stream properly. Not doing so can cause exactly this kind of problem if you call the method regularly, because .NET will only open a few connections to the same host at any one time.
// Make sure the stream gets closed once we're done with it
using (Stream stream = resp.GetResponseStream())
{
    // A larger buffer size would be beneficial, but it's not going
    // to make a significant difference.
    while ((read = stream.Read(buffer, total, 1000)) != 0)
    {
        total += read;
    }
}
I cannot test the network behavior of the WebResponse stream, but you handle the data twice (once in your loop and once through your MemoryStream).
I don't think that's the whole problem, but I'd give this a try:
string sourceURL = "http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT";
WebRequest req = (WebRequest)WebRequest.Create(sourceURL);
req.Credentials = new NetworkCredential("admin", "123456");
WebResponse resp = req.GetResponse();
Stream stream = resp.GetResponseStream();
Bitmap bmp = (Bitmap)Bitmap.FromStream(stream);
string path = JPGName.Text + ".jpg";
bmp.Save(path);
Try to read bigger pieces of data than 1000 bytes at a time. Within your loop, I can see no problem with, for example,
read = stream.Read(buffer, total, buffer.Length - total);
Try this to download the file.
using (WebClient webClient = new WebClient())
{
    webClient.DownloadFile("http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT", "c:\\Temp\\myPic.jpg");
}
You can use a DateTime to put a unique stamp on the shot.
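For example (a sketch with a hypothetical file-name pattern; the credentials from the question are added since the camera requires them):
// Sketch: timestamped file name so each snapshot is saved under a unique path.
string fileName = string.Format(@"c:\Temp\snapshot_{0:yyyyMMdd_HHmmss}.jpg", DateTime.Now);
using (WebClient webClient = new WebClient())
{
    webClient.Credentials = new NetworkCredential("admin", "123456");
    webClient.DownloadFile("http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT", fileName);
}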