HttpWebRequest increases latency - C#

I have a piece of code that, I've noticed, begins to increase the latency on my computer after prolonged use. The requests slowly get longer and longer, and per SpeedOf.Me my latency visibly increases. The only thing that seems to cure it is resetting my modem. Am I not correctly closing connections or releasing resources? Why is this happening?
var request = HttpWebRequest.Create(uri);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (GZipStream zip = new GZipStream(response.GetResponseStream(), CompressionMode.Decompress, true))
using (StreamReader unzip = new StreamReader(zip))
{
    string str = unzip.ReadToEnd();
}
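For comparison, here is a minimal sketch of the same download (assuming the same uri variable) that lets the framework handle decompression and disposal; the looping example further down this page uses the same AutomaticDecompression approach:
// Sketch: same download, letting the framework handle gzip/deflate.
// "uri" is assumed from the snippet above.
var request = (HttpWebRequest)WebRequest.Create(uri);
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string str = reader.ReadToEnd();
}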

Related

Why does .NET Core HttpClient or WebClient take so much longer than Python

I am trying to retrieve content from a URL. I have tried .NET Core's HttpClient and WebClient, both of which take ~10 seconds to load this specific website.
However, when I use Python's urllib.request it loads within the same second. I have tried pretty much all the different combinations, including DownloadString, GetStringAsync, GetStreamAsync, GetAsync, OpenRead, etc.
I can provide the specific URL if needed. Any possible ideas?
Attempt #1
WebClient client = new WebClient();
Stream data = client.OpenRead("https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/dtpp/search/");
StreamReader reader = new StreamReader(data);
string s = await reader.ReadToEndAsync();
data.Close();
reader.Close();
return s;
Attempt #2
using (var wc = new HttpClient())
{
    var test = wc.GetAsync("https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/dtpp/search/").Result;
    var contents = test.Content.ReadAsStringAsync().Result;
    return contents;
}
Attempt #3
using (var wc = new HttpClient())
{
    HTML = await wc.GetStringAsync("https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/dtpp/search/");
    return HTML;
}
All three attempts work; they just take ~10 seconds every time. If I run the same sort of thing in Python it returns within the same second.
Python Version
with urllib.request.urlopen('https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/dtpp/search/') as response:
    html = response.read()
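One thing worth ruling out (an assumption on my part, not something confirmed for this site): .NET's automatic proxy detection, which can stall the first request for several seconds while Python's urllib skips it. A minimal sketch that turns it off:
// Sketch: disable automatic proxy detection (WPAD), a common cause of
// multi-second delays on the first .NET request.
var handler = new HttpClientHandler { UseProxy = false };
using (var client = new HttpClient(handler))
{
    string html = await client.GetStringAsync(
        "https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/dtpp/search/");
    return html;
}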

HttpWebResponse response string is truncated

The app is talking to a REST service.
Fiddler shows the full, well-formed XML response coming in as the app's response.
The app is in French Polynesia, and an identical copy in NZ works, so the prime suspect seemed to be encoding, but we checked that out and came up empty-handed.
Looking at the output string (UTF-8 encoding) from the stream reader, you can see where it has been truncated. It is in an innocuous piece of XML. The downstream error on an XmlDocument object claims to have encountered an unexpected end of file while loading the string into the XML document, which is fair enough.
The truncation point is
ns6:sts-krn>1&
which is part of
ns6:sts-krn>1</ns6:sts-krn><
Is there any size limitation on the response string, or some other parameter that we should check?
I am all out of ideas. Code supplied as requested.
Stream streamResponse = response.GetResponseStream();
StringBuilder sb = new StringBuilder();
Encoding encode = Encoding.GetEncoding("utf-8");
if (streamResponse != null)
{
    StreamReader readStream = new StreamReader(streamResponse, encode);
    while (readStream.Peek() >= 0)
    {
        sb.Append((char)readStream.Read());
    }
    streamResponse.Close();
}
You need to be using using blocks:
using (WebResponse response = request.GetResponse())
{
    using (Stream streamResponse = response.GetResponseStream())
    {
        StringBuilder sb = new StringBuilder();
        if (streamResponse != null)
        {
            using (StreamReader readStream = new StreamReader(streamResponse, Encoding.UTF8))
            {
                sb.Append(readStream.ReadToEnd());
            }
        }
    }
}
This will ensure that your WebResponse, Stream, and StreamReader all get cleaned up, regardless of whether there are any exceptions.
The reasoning that led me to think about using blocks was:
Some operation was not completed
There were no try/catch blocks hiding exceptions, so if the operation wasn't completed due to exceptions, we would know about it.
There were objects which implement IDisposable which were not in using blocks
Conclusion: try implementing the using blocks to see if disposing the objects will cause the operation to complete.
I added this because the reasoning is actually quite general. The same reasoning works for "my mail message doesn't get sent for two minutes". In that case, the operation which isn't completed is "send email" and the instances are the SmtpClient and MailMessage objects.
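To make that concrete, here is a minimal sketch of the mail case under the same reasoning (host, port, and addresses are hypothetical):
// Sketch: dispose SmtpClient and MailMessage so the send completes promptly
// instead of waiting for finalization. All names here are hypothetical.
using (var client = new SmtpClient("smtp.example.com", 25))
using (var message = new MailMessage("from@example.com", "to@example.com", "subject", "body"))
{
    client.Send(message);
}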
An easier way to do this would be:
string responseString;
using (StreamReader readStream = new StreamReader(streamResponse, encode))
{
    responseString = readStream.ReadToEnd();
}
For debugging, I would suggest writing that response stream to a file so that you can see exactly what was read. In addition, you might consider using a single-byte encoding (like ISO-8859-1) to read the data and write it to the file.
You should check the response.ContentType property to see if some other text encoding is used.
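A sketch of that check, assuming the HttpWebResponse from the code above and falling back to UTF-8 when no usable charset is advertised:
// Sketch: pick the reader encoding from the response headers instead of
// hard-coding UTF-8. "response" is assumed to be the HttpWebResponse above.
Encoding encoding = Encoding.UTF8;
string charset = response.CharacterSet;
if (!string.IsNullOrEmpty(charset))
{
    try { encoding = Encoding.GetEncoding(charset); }
    catch (ArgumentException) { /* unknown charset name: keep UTF-8 */ }
}
using (var reader = new StreamReader(response.GetResponseStream(), encoding))
{
    string responseString = reader.ReadToEnd();
}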

HttpWebRequest gets slower when adding an Interval

Testing different ways to download the source of a webpage, I got the following results (average time in ms for google.com and 9gag.com):
Plain HttpWebRequest: 169, 360
Gzip HttpWebRequest: 143, 260
WebClient GetStream : 132, 295
WebClient DownloadString: 143, 389
So for my 9gag client I decided to take the gzip HttpWebRequest. The problem is, after implementing it in my actual program, the request takes more than twice the time.
The problem also occurs when just adding a Thread.Sleep between two requests.
EDIT:
Just improved the code a bit; still the same problem: when running in a loop, the requests take longer when I add a delay between two requests.
for (int i = 0; i < 100; i++)
{
    getWebsite("http://9gag.com/");
}
Takes about 250ms per request.
for (int i = 0; i < 100; i++)
{
    getWebsite("http://9gag.com/");
    Thread.Sleep(1000);
}
Takes about 610ms per request.
private string getWebsite(string Url)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    HttpWebRequest http = (HttpWebRequest)WebRequest.Create(Url);
    http.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
    string html = string.Empty;
    using (HttpWebResponse webResponse = (HttpWebResponse)http.GetResponse())
    using (Stream responseStream = webResponse.GetResponseStream())
    using (StreamReader reader = new StreamReader(responseStream))
    {
        html = reader.ReadToEnd();
    }
    Debug.WriteLine(stopwatch.ElapsedMilliseconds);
    return html;
}
Any ideas to fix this problem?
Maybe give this a try, although it might only help your case of a single request and actually make things worse when doing a multithreaded version.
ServicePointManager.UseNagleAlgorithm = false;
Here's a quote from the MSDN docs for the HttpWebRequest class:
Another option that can have an impact on performance is the use of the UseNagleAlgorithm property. When this property is set to true, TCP/IP will try to use the TCP Nagle algorithm for HTTP connections. The Nagle algorithm aggregates data when sending TCP packets. It accumulates sequences of small messages into larger TCP packets before the data is sent over the network. Using the Nagle algorithm can optimize the use of network resources, although in some situations performance can also be degraded. Generally for constant high-volume throughput, a performance improvement is realized using the Nagle algorithm. But for smaller throughput applications, degradation in performance may be seen.
An application doesn't normally need to change the default value for the UseNagleAlgorithm property which is set to true. However, if an application is using low-latency connections, it may help to set this property to false.
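If you'd rather not flip the switch process-wide, a sketch that disables Nagle for a single endpoint only (using the 9gag URL from the question):
// Sketch: disable Nagle for this host only, rather than process-wide.
ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("http://9gag.com/"));
sp.UseNagleAlgorithm = false;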
I think you might be leaking resources, as you aren't disposing of all of your IDisposable objects with each method call.
Give this version a try and see if it gives you a more consistent execution time.
public string getWebsite( string Url )
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    HttpWebRequest http = (HttpWebRequest) WebRequest.Create( Url );
    http.Headers.Add( HttpRequestHeader.AcceptEncoding, "gzip,deflate" );
    string html = string.Empty;
    using ( HttpWebResponse webResponse = (HttpWebResponse) http.GetResponse() )
    {
        using ( Stream responseStream = webResponse.GetResponseStream() )
        {
            Stream decompressedStream = null;
            if ( webResponse.ContentEncoding.ToLower().Contains( "gzip" ) )
                decompressedStream = new GZipStream( responseStream, CompressionMode.Decompress );
            else if ( webResponse.ContentEncoding.ToLower().Contains( "deflate" ) )
                decompressedStream = new DeflateStream( responseStream, CompressionMode.Decompress );
            if ( decompressedStream != null )
            {
                // The StreamReader disposes decompressedStream when it is
                // disposed, so no explicit Dispose() call is needed afterwards.
                using ( StreamReader reader = new StreamReader( decompressedStream, Encoding.Default ) )
                {
                    html = reader.ReadToEnd();
                }
            }
        }
    }
    Debug.WriteLine( stopwatch.ElapsedMilliseconds );
    return html;
}

C# StreamReader Close - Memory leak?

I coded a .NET C# Windows service that runs on our server for a very long time (several months).
Yesterday I checked and found out it was using 600 MB of memory.
I restarted the service and now it uses 60 MB of RAM.
I've started to check why it was using so much memory.
Will the following function cause a memory leak?
I think it's missing .Close() for the StreamReader.
As a test, I ran the following function in a loop 1000 times and I didn't see the memory going up.
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    StreamReader reader = new StreamReader(response.GetResponseStream());
    string tmp = reader.ReadToEnd();
    response.Close();
    return tmp;
}
Your code is closing the response, but not the reader.
var tmp = string.Empty;
using (var reader = new StreamReader(response.GetResponseStream()))
{
    tmp = reader.ReadToEnd();
}
// do whatever with tmp that you want here...
All objects that implement IDisposable such as WebResponse and StreamReader should be disposed.
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    using (var response = request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
The code does not produce a memory leak.
The code is not ideal, as everyone points out (it will close resources later than you expect), but they will be released when the GC gets around to running and finalizing unused objects.
Are you sure you see a memory leak, or are you just assuming you have one based on some semi-random value? The CLR may not free memory used by the managed heap even if no objects are allocated, and the GC may not need to run if you don't have enough memory pressure (especially on x64).
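One way to tell the difference is to force a full collection before sampling the managed heap; a rough diagnostic sketch (not production code):
// Rough diagnostic: collect, let finalizers run, collect again, then sample,
// so the number excludes garbage the GC simply hasn't collected yet.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
long managedBytes = GC.GetTotalMemory(false);
Console.WriteLine("Managed heap after full GC: {0:N0} bytes", managedBytes);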
I would suggest a lot more than 1000 iterations if you want to see if the memory increases. Each iteration would only take up a small bit of memory, if this is your memory leak.
I'm not sure if that is the source of your memory leak, but it's good practice to .Close() your StreamReaders when you're done with them.
With StreamReader it's good practice to use 'using', so that Dispose() is called automatically when the object goes out of scope.
using (var reader = new StreamReader(FilePath))
{
    string tmp = reader.ReadToEnd();
}
As for your issue, 1000 iterations is not very many. Try leaving the app for a couple of hours to clock up a few hundred thousand, and this will give you a better indication.
It could, potentially, depending on how frequently you use it, because you don't make an explicit call to Dispose() on the reader. To be sure that you've done whatever you can in these lines, write them like:
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        string tmp = reader.ReadToEnd();
        response.Close();
        return tmp;
    }
    // Dispose() of the reader is called automatically here,
    // whether there was an exception or not.
}

Slow performance in reading from stream .NET

I have a monitoring system and I want to save a snapshot from a camera when an alarm triggers.
I have tried many methods to do that, and they all work fine: stream the snapshot from the camera, then save it as a JPG on the PC (JPG format, 1280×1024, ~140 KB). That part is fine.
But my problem is the application's performance.
The app needs about 20-30 seconds to read the stream, which is not acceptable because the method will be called every 2 seconds. I need to know what's wrong with that code and how I can make it much faster.
Many thanks in advance
Code:
string sourceURL = "http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT";
byte[] buffer = new byte[200000];
int read, total = 0;
WebRequest req = (WebRequest)WebRequest.Create(sourceURL);
req.Credentials = new NetworkCredential("admin", "123456");
WebResponse resp = req.GetResponse();
Stream stream = resp.GetResponseStream();
while ((read = stream.Read(buffer, total, 1000)) != 0)
{
    total += read;
}
Bitmap bmp = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0, total));
string path = JPGName.Text + ".jpg";
bmp.Save(path);
I very much doubt that this code is the cause of the problem, at least for the first method call (but read further below).
Technically, you could produce the Bitmap without saving to a memory buffer first, or, if you don't need to display the image, you could save the raw data without ever constructing a Bitmap, but neither is going to buy you multiple seconds of improvement. Have you checked how long it takes to download the image from that URL using a browser, wget, curl, or whatever tool? I suspect something is going on at the encoding source.
Something you should do is clean up your resources and close the stream properly. Not doing so can cause exactly this kind of problem when the method is called regularly, because .NET will only open a few connections to the same host at any one time.
// Make sure the stream gets closed once we're done with it
using (Stream stream = resp.GetResponseStream())
{
    // A larger buffer size would be beneficial, but it's not going
    // to make a significant difference.
    while ((read = stream.Read(buffer, total, 1000)) != 0)
    {
        total += read;
    }
}
I can't test the network behavior of the WebResponse stream, but you handle the data twice (once in your loop and once with your memory stream).
I don't think that's the whole problem, but I'd give this a try:
string sourceURL = "http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT";
WebRequest req = (WebRequest)WebRequest.Create(sourceURL);
req.Credentials = new NetworkCredential("admin", "123456");
WebResponse resp = req.GetResponse();
Stream stream = resp.GetResponseStream();
Bitmap bmp = (Bitmap)Bitmap.FromStream(stream);
string path = JPGName.Text + ".jpg";
bmp.Save(path);
Try reading bigger pieces of data than 1000 bytes at a time. For example, within your accumulation loop:
read = stream.Read(buffer, total, buffer.Length - total);
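Alternatively, assuming .NET 4 or later, a sketch that skips the manual loop entirely and lets CopyTo do the buffering (resp and JPGName are from the question's code):
// Sketch: let Stream.CopyTo handle buffering; avoids the fixed 200,000-byte
// array. "resp" and "JPGName" are assumed from the question's code.
using (Stream stream = resp.GetResponseStream())
using (var ms = new MemoryStream())
{
    stream.CopyTo(ms);
    ms.Position = 0;
    using (Bitmap bmp = (Bitmap)Bitmap.FromStream(ms))
    {
        bmp.Save(JPGName.Text + ".jpg");
    }
}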
Try this to download the file.
using (WebClient webClient = new WebClient())
{
    webClient.DownloadFile("http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT", @"c:\Temp\myPic.jpg");
}
You can use a DateTime to put a unique stamp on the shot.
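For example, inside the using block above (the folder and format string are hypothetical):
// Hypothetical naming scheme: timestamp each snapshot so files don't collide.
string path = Path.Combine(@"c:\Temp",
    string.Format("snapshot_{0:yyyyMMdd_HHmmss}.jpg", DateTime.Now));
webClient.DownloadFile("http://192.168.0.211/cgi-bin/cmd/encoder?SNAPSHOT", path);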
