I coded a .NET C# Windows service that has been running on our server for a very long time (several months).
Yesterday I checked and found out it was using 600 MB of memory.
I restarted the service and now it uses 60 MB of RAM.
I've started to investigate why it was using so much memory.
Will the following function cause a memory leak?
I think it's missing a .Close() for the StreamReader.
As a test, I ran the following function in a loop 1000 times and I didn't see the memory going up.
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    StreamReader reader = new StreamReader(response.GetResponseStream());
    string tmp = reader.ReadToEnd();
    response.Close();
    return tmp;
}
Your code is closing the response, but not the reader.
var tmp = string.Empty;
using (var reader = new StreamReader(response.GetResponseStream()))
{
    tmp = reader.ReadToEnd();
}
// do whatever with tmp that you want here...
All objects that implement IDisposable, such as WebResponse and StreamReader, should be disposed.
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    using (var response = request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
The code does not produce a memory leak.
The code is not ideal, as everyone points out (it will release resources later than you expect), but the resources will be released when the GC gets around to running and finalizing unused objects.
Are you sure you are seeing a memory leak, or are you just assuming you have one based on some semi-random value? The CLR may not free memory used by the managed heap even if no objects are allocated, and the GC may not need to run if there isn't enough memory pressure (especially on x64).
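If you want to rule that out, one rough diagnostic (for a test build only; forcing collections is not something to leave in production code) is to force a full collection and compare the managed heap size before and after. If the number stays high afterward, suspect a genuine leak or unmanaged memory:
long before = GC.GetTotalMemory(false);
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect(); // second pass collects objects freed by finalizers
long after = GC.GetTotalMemory(true); // true forces a full collection before reading
Console.WriteLine("Before: " + before + " bytes, after full GC: " + after + " bytes");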
I would suggest a lot more than 1000 iterations if you want to see whether the memory increases. Each iteration would only take up a small bit of memory, if this is your memory leak.
I'm not sure whether that is the source of your memory leak, but it's good practice to .Close() your StreamReaders when you're done with them.
With StreamReader it's good practice to use a 'using' block; Dispose() is then called automatically when the object goes out of scope.
using (var reader = new StreamReader(FilePath))
{
    string tmp = reader.ReadToEnd();
}
As for your issue, 1000 iterations is not very many. Try leaving the app running for a couple of hours to clock up a few hundred thousand, and this will give you a better indication.
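A rough sketch of that kind of soak test, assuming the GetTemplate method from the question (the query string here is a placeholder), logging memory as you go so you see a trend rather than a single reading:
for (int i = 0; i < 500000; i++)
{
    GetTemplate("some=query"); // placeholder query string
    if (i % 10000 == 0)
        Console.WriteLine(i + ": " + GC.GetTotalMemory(false) + " bytes");
}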
It could, potentially, depending on how frequently you use it, because you don't explicitly call Dispose() on the reader. To be sure you've done whatever you can in these lines, write them like this:
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    string tmp;
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        tmp = reader.ReadToEnd();
        response.Close();
    }
    // Dispose() of the reader is called here automatically,
    // whether or not an exception was thrown.
    return tmp;
}
I have a piece of code that, I've noticed, begins to increase the latency on my computer after prolonged use. The requests slowly get longer and longer, and through SpeedOf.Me my latency visibly increases. The only thing that seems to cure the latency is resetting my modem. Am I not closing connections or releasing resources correctly? Why is this happening?
var request = HttpWebRequest.Create(uri);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (GZipStream zip = new GZipStream(response.GetResponseStream(), CompressionMode.Decompress, true))
using (StreamReader unzip = new StreamReader(zip))
{
    string str = unzip.ReadToEnd();
}
The app is talking to a REST service.
Fiddler shows a full, good XML response coming in as the app's response.
The app is in French Polynesia, and an identical copy in NZ works, so the prime suspect seemed to be encoding, but we have checked that out and come up empty-handed.
Looking at the output string (UTF-8 encoding) from the stream reader, you can see where it has been truncated. It is in an innocuous piece of XML. The downstream error on an XmlDocument object claims to have encountered an unexpected end of file while loading the string into the XmlDocument, which is fair enough.
The truncation point is
ns6:sts-krn>1&
which is part of
ns6:sts-krn>1</ns6:sts-krn><
Is there any size limitation on the response string, or some other parameter that we should check?
I am all out of ideas. Code supplied as requested.
Stream streamResponse = response.GetResponseStream();
StringBuilder sb = new StringBuilder();
Encoding encode = Encoding.GetEncoding("utf-8");
if (streamResponse != null)
{
    StreamReader readStream = new StreamReader(streamResponse, encode);
    while (readStream.Peek() >= 0)
    {
        sb.Append((char)readStream.Read());
    }
    streamResponse.Close();
}
You need to be using using blocks:
using (WebResponse response = request.GetResponse())
{
    using (Stream streamResponse = response.GetResponseStream())
    {
        StringBuilder sb = new StringBuilder();
        if (streamResponse != null)
        {
            using (StreamReader readStream = new StreamReader(streamResponse, Encoding.UTF8))
            {
                sb.Append(readStream.ReadToEnd());
            }
        }
    }
}
This will ensure that your WebResponse, Stream, and StreamReader all get cleaned up, regardless of whether there are any exceptions.
The reasoning that led me to think about using blocks was:
Some operation was not completed
There were no try/catch blocks hiding exceptions, so if the operation wasn't completed due to exceptions, we would know about it.
There were objects implementing IDisposable which were not in using blocks
Conclusion: try implementing the using blocks to see if disposing the objects will cause the operation to complete.
I added this because the reasoning is actually quite general. The same reasoning works for "my mail message doesn't get sent for two minutes". In that case, the operation which isn't completed is "send email" and the instances are the SmtpClient and MailMessage objects.
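For instance, a minimal sketch of that email case (the addresses and host are placeholders; both types live in System.Net.Mail and implement IDisposable):
using (var message = new MailMessage("from@example.com", "to@example.com", "Subject", "Body"))
using (var client = new SmtpClient("smtp.example.com"))
{
    client.Send(message);
} // disposing the client shuts down the SMTP connection cleanly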
An easier way to read the response would be:
string responseString;
using (StreamReader readStream = new StreamReader(streamResponse, encode))
{
    responseString = readStream.ReadToEnd();
}
For debugging, I would suggest writing that response stream to a file so that you can see exactly what was read. In addition, you might consider using a single-byte encoding (like ISO-8859-1) to read the data and write it to the file.
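A minimal sketch of that, assuming the response object from your code ("response.bin" is an arbitrary path). Note this consumes the stream, so for the debugging run do it instead of the normal parse:
using (Stream s = response.GetResponseStream())
using (FileStream f = File.Create("response.bin"))
{
    s.CopyTo(f); // raw bytes, no encoding applied
}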
You should also check the response.ContentType property to see whether some other text encoding is being used.
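For example, a rough sketch of honoring a declared charset (this is a naive parse, not a full Content-Type parser):
string contentType = response.ContentType; // e.g. "text/xml; charset=ISO-8859-1"
Encoding encoding = Encoding.UTF8; // fall back to UTF-8 if nothing is declared
int idx = contentType.IndexOf("charset=", StringComparison.OrdinalIgnoreCase);
if (idx >= 0)
{
    string charset = contentType.Substring(idx + 8).Split(';')[0].Trim(' ', '"');
    encoding = Encoding.GetEncoding(charset);
}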
I want to convert the data from a response stream into a custom object. My code is as follows.
void myMethod()
{
    state s = new state();
    Stream receiveStream;
    StreamReader readStream;
    HttpWebRequest request;
    HttpWebResponse response;
    try
    {
        request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.ContentType = "application/json";
        response = (HttpWebResponse)request.GetResponse();
        receiveStream = response.GetResponseStream();
        readStream = new StreamReader(receiveStream);
        Console.WriteLine(readStream.ReadToEnd());
        var serializer = new DataContractJsonSerializer(typeof(state));
        s = serializer.ReadObject(readStream.BaseStream) as state;
        Console.Write(s.name + "\n");
        response.Close();
        readStream.Close();
    }
    catch (Exception ex)
    {
    }
}
The object s ends up containing nothing.
Can anyone help me?
The trouble is that you're trying to deserialize from the stream when you've already read all the data out of it just beforehand:
readStream = new StreamReader(receiveStream);
Console.WriteLine(readStream.ReadToEnd());
After those lines the stream will be empty, so there's nothing to deserialize. Get rid of those lines (then use receiveStream below) and you may well find it just works.
Additionally, a few suggestions (a sketch applying them follows this list):
Rather than closing streams explicitly, use using statements
Add a using statement for the response itself, as that implements IDisposable
Keep the scope of each variable as small as it can be, assigning a value at the point of declaration
It's rarely a good idea to catch Exception, and it's almost never a good idea to just swallow exceptions the way you're doing here, with no logging etc.
Follow .NET naming conventions, where state would be State (and possibly make the name a bit more descriptive anyway)
Use a cast rather than as - see my blog post on the topic for reasons
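Putting those together, a minimal sketch (GetState is a made-up name; State and url are assumed from the question):
static State GetState(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "GET";
    request.ContentType = "application/json";
    using (var response = (HttpWebResponse)request.GetResponse())
    using (Stream receiveStream = response.GetResponseStream())
    {
        var serializer = new DataContractJsonSerializer(typeof(State));
        // Deserialize straight from the stream; don't ReadToEnd() first.
        return (State)serializer.ReadObject(receiveStream);
    }
}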
Testing different possibilities to download the source of a webpage, I got the following results (average time in ms to google.com and 9gag.com):
Plain HttpWebRequest: 169, 360
Gzip HttpWebRequest: 143, 260
WebClient GetStream: 132, 295
WebClient DownloadString: 143, 389
So for my 9gag client I decided to use the gzip HttpWebRequest. The problem is that after implementing it in my actual program, the request takes more than twice as long.
The problem also occurs when just adding a Thread.Sleep between two requests.
EDIT:
Just improved the code a bit; still the same problem: when running in a loop, the requests take longer when I add a delay between two requests.
for (int i = 0; i < 100; i++)
{
    getWebsite("http://9gag.com/");
}
Takes about 250ms per request.
for (int i = 0; i < 100; i++)
{
    getWebsite("http://9gag.com/");
    Thread.Sleep(1000);
}
Takes about 610ms per request.
private string getWebsite(string Url)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    HttpWebRequest http = (HttpWebRequest)WebRequest.Create(Url);
    http.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
    string html = string.Empty;
    using (HttpWebResponse webResponse = (HttpWebResponse)http.GetResponse())
    using (Stream responseStream = webResponse.GetResponseStream())
    using (StreamReader reader = new StreamReader(responseStream))
    {
        html = reader.ReadToEnd();
    }
    Debug.WriteLine(stopwatch.ElapsedMilliseconds);
    return html;
}
Any ideas to fix this problem?
Maybe give this a try, although it might only help your case of a single request, and could actually make things worse in a multithreaded version.
ServicePointManager.UseNagleAlgorithm = false;
Here's a quote from the MSDN docs for the HttpWebRequest class:
Another option that can have an impact on performance is the use of the UseNagleAlgorithm property. When this property is set to true, TCP/IP will try to use the TCP Nagle algorithm for HTTP connections. The Nagle algorithm aggregates data when sending TCP packets. It accumulates sequences of small messages into larger TCP packets before the data is sent over the network. Using the Nagle algorithm can optimize the use of network resources, although in some situations performance can also be degraded. Generally for constant high-volume throughput, a performance improvement is realized using the Nagle algorithm. But for smaller throughput applications, degradation in performance may be seen.
An application doesn't normally need to change the default value for the UseNagleAlgorithm property, which is set to true. However, if an application is using low-latency connections, it may help to set this property to false.
I think you might be leaking resources, as you aren't disposing of all of your IDisposable objects with each method call.
Give this version a try and see if it gives you a more consistent execution time.
public string getWebsite(string Url)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    HttpWebRequest http = (HttpWebRequest)WebRequest.Create(Url);
    http.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip,deflate");
    string html = string.Empty;
    using (HttpWebResponse webResponse = (HttpWebResponse)http.GetResponse())
    {
        using (Stream responseStream = webResponse.GetResponseStream())
        {
            Stream decompressedStream = null;
            if (webResponse.ContentEncoding.ToLower().Contains("gzip"))
                decompressedStream = new GZipStream(responseStream, CompressionMode.Decompress);
            else if (webResponse.ContentEncoding.ToLower().Contains("deflate"))
                decompressedStream = new DeflateStream(responseStream, CompressionMode.Decompress);
            if (decompressedStream != null)
            {
                using (StreamReader reader = new StreamReader(decompressedStream, Encoding.Default))
                {
                    html = reader.ReadToEnd();
                }
                decompressedStream.Dispose();
            }
        }
    }
    Debug.WriteLine(stopwatch.ElapsedMilliseconds);
    return html;
}
I have a MonoTouch based iOS universal app. It uses REST services to make calls to get data. I'm using the HttpWebRequest class to build and make my calls. Everything works great, except that the app seems to be holding onto memory. I've got using blocks all over the code to limit the scope of things, and I've avoided anonymous delegates as well, as I had heard they can be a problem. I have a helper class that builds up my calls to the REST service, and as I make calls the app seems to just hold onto memory. I'm curious whether anyone has run into similar issues with HttpWebRequest and what to do about them. I'm currently looking to see if I can make the call using an NSMutableUrlRequest and avoid HttpWebRequest entirely, but I'm struggling to get that to work with NTLM authentication. Any advice is appreciated.
protected T IntegrationCall<T,I>(string methodName, I input)
{
    HttpWebRequest invokeRequest = BuildWebRequest<I>(GetMethodURL(methodName), "POST", input, true);
    WebResponse response = invokeRequest.GetResponse();
    T result = DeserializeResponseObject<T>((HttpWebResponse)response);
    invokeRequest = null;
    response = null;
    return result;
}
protected HttpWebRequest BuildWebRequest<T>(string url, string method, T requestObject, bool IncludeCredentials)
{
    ServicePointManager.ServerCertificateValidationCallback = Validator;
    var invokeRequest = WebRequest.Create(url) as HttpWebRequest;
    if (invokeRequest == null)
        return null;
    if (IncludeCredentials)
    {
        invokeRequest.Credentials = CommonData.IntegrationCredentials;
    }
    if (!string.IsNullOrEmpty(method))
        invokeRequest.Method = method;
    else
        invokeRequest.Method = "POST";
    invokeRequest.ContentType = "text/xml";
    invokeRequest.Timeout = 40000;
    using (Stream requestObjectStream = new MemoryStream())
    {
        DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
        serializedObject.WriteObject(requestObjectStream, requestObject);
        requestObjectStream.Position = 0;
        using (StreamReader reader = new StreamReader(requestObjectStream))
        {
            string strTempRequestObject = reader.ReadToEnd();
            //byte[] requestBodyBytes = Encoding.UTF8.GetBytes(strTempRequestObject);
            Encoding enc = new UTF8Encoding(false);
            byte[] requestBodyBytes = enc.GetBytes(strTempRequestObject);
            invokeRequest.ContentLength = requestBodyBytes.Length;
            using (Stream postStream = invokeRequest.GetRequestStream())
            {
                postStream.Write(requestBodyBytes, 0, requestBodyBytes.Length);
            }
        }
    }
    return invokeRequest;
}
Using using is the right thing to do, but your code seems to be duplicating the same content multiple times (which it should not do).
requestObjectStream is turned into a string, which is then turned into a byte[], before being written to another stream. And that's without considering what the extra code (e.g. ReadToEnd and UTF8Encoding.GetBytes) might allocate itself (e.g. more strings, byte[]s...).
So if what you serialize is large, then you'll consume a lot of extra memory (for nothing). It's even a bit worse for string and byte[], since you can't dispose them manually (the GC will decide when, making measurement harder).
I would try (but have not tested ;-) something like:
...
using (Stream requestObjectStream = new MemoryStream())
{
    DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
    serializedObject.WriteObject(requestObjectStream, requestObject);
    requestObjectStream.Position = 0;
    invokeRequest.ContentLength = requestObjectStream.Length;
    using (Stream postStream = invokeRequest.GetRequestStream())
        requestObjectStream.CopyTo(postStream);
}
...
That would let the MemoryStream copy itself to the request stream. An alternative is to call ToArray on the MemoryStream (but that's another copy of the serialized object that the GC will have to track and free).
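For completeness, a sketch of that ToArray alternative (assuming requestObjectStream is declared as MemoryStream rather than Stream, so ToArray() is available):
byte[] body = requestObjectStream.ToArray(); // the extra copy the GC must track
invokeRequest.ContentLength = body.Length;
using (Stream postStream = invokeRequest.GetRequestStream())
{
    postStream.Write(body, 0, body.Length);
}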