HttpWebClient has High Memory Use in MonoTouch - c#

I have a MonoTouch based iOS universal app that uses REST services to fetch data. I build and make my calls with the HttpWebRequest class. Everything works great, except that the app seems to hold onto memory across calls. I've scoped things with using blocks throughout the code, and I've avoided anonymous delegates as well, since I'd heard they can be a problem.
I have a helper class that builds up each call to my REST service, and as I make calls the app just keeps accumulating memory. Has anyone run into similar issues with HttpWebRequest, and what did you do about it? I'm currently looking at making the call with an NSMutableURLRequest and avoiding HttpWebRequest entirely, but I'm struggling to get that working with NTLM authentication. Any advice is appreciated.
protected T IntegrationCall<T,I>(string methodName, I input)
{
    HttpWebRequest invokeRequest = BuildWebRequest<I>(GetMethodURL(methodName), "POST", input, true);
    WebResponse response = invokeRequest.GetResponse();
    T result = DeserializeResponseObject<T>((HttpWebResponse)response);
    invokeRequest = null;
    response = null;
    return result;
}
protected HttpWebRequest BuildWebRequest<T>(string url, string method, T requestObject, bool IncludeCredentials)
{
    ServicePointManager.ServerCertificateValidationCallback = Validator;
    var invokeRequest = WebRequest.Create(url) as HttpWebRequest;
    if (invokeRequest == null)
        return null;

    if (IncludeCredentials)
    {
        invokeRequest.Credentials = CommonData.IntegrationCredentials;
    }

    if (!string.IsNullOrEmpty(method))
        invokeRequest.Method = method;
    else
        invokeRequest.Method = "POST";

    invokeRequest.ContentType = "text/xml";
    invokeRequest.Timeout = 40000;

    using (Stream requestObjectStream = new MemoryStream())
    {
        DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
        serializedObject.WriteObject(requestObjectStream, requestObject);
        requestObjectStream.Position = 0;
        using (StreamReader reader = new StreamReader(requestObjectStream))
        {
            string strTempRequestObject = reader.ReadToEnd();
            //byte[] requestBodyBytes = Encoding.UTF8.GetBytes(strTempRequestObject);
            Encoding enc = new UTF8Encoding(false);
            byte[] requestBodyBytes = enc.GetBytes(strTempRequestObject);
            invokeRequest.ContentLength = requestBodyBytes.Length;
            using (Stream postStream = invokeRequest.GetRequestStream())
            {
                postStream.Write(requestBodyBytes, 0, requestBodyBytes.Length);
            }
        }
    }
    return invokeRequest;
}

Using using is the right thing to do, but your code is duplicating the same content multiple times (which it should not do).
requestObjectStream is turned into a string, which is then turned into a byte[], before being written to yet another stream. And that's without counting what the extra calls (e.g. ReadToEnd and UTF8Encoding.GetBytes) allocate internally (more strings, more byte[]...).
So if what you serialize is large, you'll consume a lot of extra memory for nothing: a 1 MB serialized payload briefly exists three times over, as the MemoryStream's buffer, as a UTF-16 string (about 2 MB), and as a UTF-8 byte[]. It's even a bit worse for the string and byte[] since you can't dispose them manually (the GC decides when, making measurement harder).
I would try (but did not ;-) something like:
...
using (Stream requestObjectStream = new MemoryStream())
{
    DataContractSerializer serializedObject = new DataContractSerializer(typeof(T));
    serializedObject.WriteObject(requestObjectStream, requestObject);
    requestObjectStream.Position = 0;
    invokeRequest.ContentLength = requestObjectStream.Length;
    using (Stream postStream = invokeRequest.GetRequestStream())
        requestObjectStream.CopyTo(postStream);
}
...
That lets the MemoryStream copy itself to the request stream. An alternative is to call ToArray on the MemoryStream (but that's another copy of the serialized object that the GC will have to track and free).
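For reference, the ToArray variant would look something like this (equally untested, reusing the names from the snippet above):
byte[] body = requestObjectStream.ToArray(); // extra copy the GC has to track and free
invokeRequest.ContentLength = body.Length;
using (Stream postStream = invokeRequest.GetRequestStream())
    postStream.Write(body, 0, body.Length);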

Related

HttpWebResponse returns an 'incomplete' stream

I'm making repeated requests to a web server using HttpWebRequest, but I randomly get a 'broken' response stream in return, e.g. it doesn't contain tags that I KNOW are supposed to be there. If I request the same page multiple times in a row, it comes back 'broken' roughly 3 times out of 5.
The request always returns a 200 response, so I first thought there was a null value inserted in the response that made the StreamReader think it had reached the end.
I've tried:
1) reading everything into a byte array and cleaning it
2) inserting a random Thread.Sleep after each request
Is there any potentially bad practice in my code below, or can anyone tell me why I'm randomly getting an incomplete response stream? As far as I can see I'm closing all unmanaged resources, so that shouldn't be a problem, right?
public string ReturnHtmlResponse(string url)
{
    string result;
    var request = (HttpWebRequest)WebRequest.Create(url);
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine((int)response.StatusCode);
        var encoding = Encoding.GetEncoding(response.CharacterSet);
        using (var stream = response.GetResponseStream())
        {
            using (var sr = new StreamReader(stream, encoding))
            {
                result = sr.ReadToEnd();
            }
        }
    }
    return result;
}
I do not see any direct flaws in your code. It could be that one of the 'parent' using statements is disposed before the nested one. Try replacing the using blocks with explicit Close() and Dispose() calls:
public string ReturnHtmlResponse(string url)
{
    string result;
    var request = (HttpWebRequest)WebRequest.Create(url);
    var response = (HttpWebResponse)request.GetResponse();
    Console.WriteLine((int)response.StatusCode);
    var encoding = Encoding.GetEncoding(response.CharacterSet);
    var stream = response.GetResponseStream();
    var sr = new StreamReader(stream, encoding);
    result = sr.ReadToEnd();
    sr.Close();
    stream.Close();
    response.Close();
    sr.Dispose();
    stream.Dispose();
    response.Dispose();
    return result;
}

C# exception "This stream does not support seek operations." for HttpWebRequest method "PUT"

I'm using the PUT method to update some data, but the code below is not working.
The code:
var schemaRequest = WebRequest.Create(new Uri(SchemaUri)) as HttpWebRequest;
schemaRequest.Method = "PUT";
schemaRequest.ContentType = "text/xml";
schemaRequest.Credentials = CredentialCache.DefaultNetworkCredentials;
schemaRequest.Proxy = WebRequest.DefaultWebProxy;
schemaRequest.AddRange(1024);
string test = "<ArrayOfUpdateNodeRequest> <UpdateNodeRequest> <Description>vijay</Description> <Name>Publishing</Name></UpdateNodeRequest></ArrayOfUpdateNodeRequest>";
byte[] arr = System.Text.Encoding.UTF8.GetBytes(test);
schemaRequest.ContentLength = arr.Length;
using (var dataStream = schemaRequest.GetRequestStream())
{
    dataStream.Write(arr, 0, arr.Length);
}
I'm getting the exception "This stream does not support seek operations." at GetRequestStream().
The exception is pretty clear: the stream doesn't support seeking. Inspecting the stream object in the debugger doesn't mean you need to seek; if you do need to, please provide an example. You should simply be able to write to the stream for it to be sent to the host. It doesn't make sense to seek a stream you're sending to a host (e.g. how would you seek back before a byte you've already sent over the wire?).
If you need to seek locally, before sending to the host, create a memory stream and seek that way. For example:
using (MemoryStream memoryStream = new MemoryStream())
{
    // ... writes
    memoryStream.Seek(0, SeekOrigin.Begin);
    // ... writes
    memoryStream.Position = 0; // rewind before copying to the request stream
    using (var requestStream = schemaRequest.GetRequestStream())
    {
        memoryStream.CopyTo(requestStream);
    }
}

HttpWebRequest gets slower when adding an Interval

Testing different ways of downloading the source of a webpage, I got the following results (average time in ms for google.com and 9gag.com):
Plain HttpWebRequest: 169, 360
Gzip HttpWebRequest: 143, 260
WebClient GetStream : 132, 295
WebClient DownloadString: 143, 389
So for my 9gag client I decided to take the gzip HttpWebRequest. The problem is that after implementing it in my actual program, the request takes more than twice as long.
The problem also occurs when just adding a Thread.Sleep between two requests.
EDIT:
Just improved the code a bit; still the same problem: when running in a loop, the requests take longer when I add a delay between two requests.
for (int i = 0; i < 100; i++)
{
    getWebsite("http://9gag.com/");
}
Takes about 250ms per request.
for (int i = 0; i < 100; i++)
{
    getWebsite("http://9gag.com/");
    Thread.Sleep(1000);
}
Takes about 610ms per request.
private string getWebsite(string Url)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    HttpWebRequest http = (HttpWebRequest)WebRequest.Create(Url);
    http.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
    string html = string.Empty;
    using (HttpWebResponse webResponse = (HttpWebResponse)http.GetResponse())
    using (Stream responseStream = webResponse.GetResponseStream())
    using (StreamReader reader = new StreamReader(responseStream))
    {
        html = reader.ReadToEnd();
    }
    Debug.WriteLine(stopwatch.ElapsedMilliseconds);
    return html;
}
Any ideas to fix this problem?
Maybe give this a try, although it might only help in your single-request case and could actually make things worse in a multithreaded version.
ServicePointManager.UseNagleAlgorithm = false;
Here's a quote from the MSDN docs for the HttpWebRequest class:
Another option that can have an impact on performance is the use of the UseNagleAlgorithm property. When this property is set to true, TCP/IP will try to use the TCP Nagle algorithm for HTTP connections. The Nagle algorithm aggregates data when sending TCP packets. It accumulates sequences of small messages into larger TCP packets before the data is sent over the network. Using the Nagle algorithm can optimize the use of network resources, although in some situations performance can also be degraded. Generally, for constant high-volume throughput, a performance improvement is realized using the Nagle algorithm. But for smaller throughput applications, degradation in performance may be seen.
An application doesn't normally need to change the default value for the UseNagleAlgorithm property, which is set to true. However, if an application is using low-latency connections, it may help to set this property to false.
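If you'd rather not flip the switch process-wide, ServicePointManager.FindServicePoint lets you change it for a single endpoint; a small sketch (untested, and it must run before the first request to that host goes out):
ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("http://9gag.com/"));
sp.UseNagleAlgorithm = false; // affects only connections to this service point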
I think you might be leaking resources, as you aren't disposing of all of your IDisposable objects on each method call.
Give this version a try and see if it gives you a more consistent execution time.
public string getWebsite(string Url)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    HttpWebRequest http = (HttpWebRequest)WebRequest.Create(Url);
    http.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip,deflate");
    string html = string.Empty;
    using (HttpWebResponse webResponse = (HttpWebResponse)http.GetResponse())
    using (Stream responseStream = webResponse.GetResponseStream())
    {
        // Wrap the raw stream in a decompressor when the server compressed the body;
        // otherwise read the response stream directly.
        Stream decodedStream = responseStream;
        if (webResponse.ContentEncoding.ToLower().Contains("gzip"))
            decodedStream = new GZipStream(responseStream, CompressionMode.Decompress);
        else if (webResponse.ContentEncoding.ToLower().Contains("deflate"))
            decodedStream = new DeflateStream(responseStream, CompressionMode.Decompress);

        // Disposing the reader disposes the decompression stream,
        // which in turn disposes the underlying response stream.
        using (StreamReader reader = new StreamReader(decodedStream, Encoding.Default))
        {
            html = reader.ReadToEnd();
        }
    }
    Debug.WriteLine(stopwatch.ElapsedMilliseconds);
    return html;
}

C# StreamReader Close - Memory leak?

I wrote a .NET C# Windows service that runs on our server for a very long time (several months).
Yesterday I checked and found that it was using 600MB of memory. I restarted the service and now it uses 60MB of RAM.
I've started investigating why it uses so much memory. Will the following function cause a memory leak? I think it's missing a .Close() for the StreamReader.
As a test, I ran the following function in a loop 1000 times and I didn't see the memory going up.
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    StreamReader reader = new StreamReader(response.GetResponseStream());
    string tmp = reader.ReadToEnd();
    response.Close();
    return tmp;
}
Your code is closing the response, but not the reader.
var tmp = string.Empty;
using (var reader = new StreamReader(response.GetResponseStream()))
{
    tmp = reader.ReadToEnd();
}
// do whatever with tmp that you want here...
All objects that implement IDisposable such as WebResponse and StreamReader should be disposed.
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    using (var response = request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
The code does not produce a memory leak.
It is not ideal, as everyone points out (it will release resources later than you expect), but those resources will be released when the GC gets around to running and finalizing unused objects.
Are you sure you are seeing a memory leak, or are you just assuming you have one based on some semi-random value? The CLR may not return memory used by the managed heap even when no objects are allocated, and the GC may not need to run if there isn't enough memory pressure (especially on x64).
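If you want a rough signal of whether that memory is actually reclaimable rather than leaked, one crude check (my sketch, not a profiler replacement) is to force a full collection around the loop and compare the totals:
long before = GC.GetTotalMemory(true);
for (int i = 0; i < 1000; i++)
    GetTemplate("...");                   // the function from the question
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();                             // second pass picks up finalized objects
long after = GC.GetTotalMemory(true);     // 'true' forces a collection before sampling
Console.WriteLine("Delta: {0} bytes", after - before);
If the delta stays flat as you raise the iteration count, objects are being collected and you're just seeing normal GC laziness.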
I would suggest a lot more than 1000 iterations if you want to see whether the memory increases. Each iteration only takes up a small amount of memory, if this is indeed the source of your leak.
I'm not sure whether this is the source of your memory leak, but it's good practice to .Close() your StreamReaders when you're done with them.
With StreamReader it's good practice to use 'using', so that Dispose() is called automatically when the object goes out of scope.
using (var reader = new StreamReader(FilePath))
{
    string tmp = reader.ReadToEnd();
}
As for your issue: 1000 iterations is not very many. Try leaving the app running for a couple of hours to clock up a few hundred thousand, and that will give you a better indication.
It could, potentially; it depends on how frequently you call it, because you don't explicitly call Dispose() on the reader. To be sure you've done all you can in these lines, write them like this:
private static string GetTemplate(string queryparams)
{
    WebRequest request = HttpWebRequest.Create(uri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        string tmp = reader.ReadToEnd();
        response.Close();
        return tmp;
    }
    // Dispose() of the reader is called here automatically,
    // whether there was an exception or not.
}

SOAP to Stream to String

I have a SOAP object that I want to capture as a string. This is what I have now:
RateRequest request = new RateRequest();
// Do some stuff to request here
SoapFormatter soapFormat = new SoapFormatter();
using (MemoryStream myStream = new MemoryStream())
{
    soapFormat.Serialize(myStream, request);
    myStream.Position = 0;
    using (StreamReader sr = new StreamReader(myStream))
    {
        string reqString = sr.ReadToEnd();
    }
}
Is there a more elegant way to do this? I don't care that much about the resulting string format - just so it's human readable. XML is fine.
No, that's pretty much the way to do it. You could always factor this out into a method that does the work for you, so you can reduce it to a single call where you need it.
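A factored-out version might look something like this (a sketch; SerializeToSoapString is a name made up for illustration):
static string SerializeToSoapString(object graph)
{
    var formatter = new SoapFormatter();
    using (var stream = new MemoryStream())
    {
        formatter.Serialize(stream, graph);
        stream.Position = 0;
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }
}
The call site then reduces to string reqString = SerializeToSoapString(request);.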
I think you can also do this:
soapFormat.Serialize(myStream, request);
string xml = System.Text.ASCIIEncoding.ASCII.GetString(myStream.GetBuffer());
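One caveat with that (my note, not part of the original answer): GetBuffer() returns the stream's whole internal buffer, which is usually larger than the data actually written, so the resulting string can carry trailing NUL padding; ToArray() copies exactly the bytes written. And since SOAP output is XML, UTF-8 is a safer choice than ASCII:
soapFormat.Serialize(myStream, request);
string xml = System.Text.Encoding.UTF8.GetString(myStream.ToArray());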
