Getting hit with an unhandled OutOfMemoryException.
using (var httpclient = new HttpClient(httpClientHandler))
{
    httpclient.DefaultRequestHeaders.AcceptEncoding.Add(new StringWithQualityHeaderValue("gzip"));
    httpclient.DefaultRequestHeaders.AcceptEncoding.Add(new StringWithQualityHeaderValue("deflate"));
    var request = new HttpRequestMessage(HttpMethod.Post, url);
    request.Content = new FormUrlEncodedContent(parameters);
    var response = await httpclient.SendAsync(request);
    var contents = await response.Content.ReadAsStringAsync();
    var source = contents.ToString();
    return source;
}
I'm not really sure what to do or what the specific cause is, but I believe it has something to do with the await response.Content.ReadAsStringAsync() call.
Someone suggested using ReadAsStreamAsync() instead and writing the output to a file; however, I need the output as a string in "source" so I can analyse the data in another function.
I would also like to add that I'm running multiple threads.
Is it possible that Response.Content is being kept in memory even after that specific function has finished? Do I need to dispose of or clear the contents after I've returned it to source?
The advised solution is correct. It seems that the OS your application is hosted on does not have enough memory. To work around this, it is wise to write the stream to a file on disk instead of keeping it in memory. TransferEncodingChunked may also help, but the sender needs to support it.
using (FileStream fs = File.Create("fileName.ext"))
{
    ....
    var stream = await response.Content.ReadAsStreamAsync();
    // copy the response stream to the file, not the other way round
    await stream.CopyToAsync(fs);
}
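Putting that together with the original SendAsync call, a minimal sketch might look like this (the file name is a placeholder, and HttpCompletionOption.ResponseHeadersRead is assumed so that HttpClient does not buffer the body before handing it to you):

var request = new HttpRequestMessage(HttpMethod.Post, url);
request.Content = new FormUrlEncodedContent(parameters);
// ResponseHeadersRead returns as soon as the headers arrive, so the content is not buffered in memory first.
using (var response = await httpclient.SendAsync(request, HttpCompletionOption.ResponseHeadersRead))
using (var responseStream = await response.Content.ReadAsStreamAsync())
using (var fs = File.Create("download.tmp")) // placeholder path
{
    await responseStream.CopyToAsync(fs);
}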
Related
I'm trying to download streams (.ts, .mkv, .avi) from one source (only reachable locally) and relay them in parallel through an ASP.NET MVC response, so the stream can be accessed from outside without any manipulation.
I have this code so far.
using (HttpClient client = new HttpClient())
{
    using (HttpResponseMessage response = client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead).Result)
    using (Stream streamToReadFrom = response.Content.ReadAsStreamAsync().Result)
    {
        using (var returnStream = new MemoryStream())
        {
            streamToReadFrom.CopyToAsync(returnStream);
            return new FileStreamResult(returnStream, "video/mp2t");
        }
    }
}
I thought I could just pass the download stream on to the response stream, but I got stuck on the asynchronous part. The system responds with the following:
ObjectDisposedException: Cannot access a closed Stream.
Does anyone have an idea what I have to change, to get this running?
I found a second solution to my problem which sort of seems to work. At least the code won't crash. But when I try to save the file, it doesn't contain any data (0 kB).
var client = new HttpClient();
var result = await client.GetAsync(url);
var stream = await result.Content.ReadAsStreamAsync();
return new FileStreamResult(stream, "video/mp2t");
But now I can't access the stream, and I am not sure why that is.
I have made the action asynchronous in order to use this code.
I am not sure whether I have to change the result stream, add some header information, or do something else to get it to work.
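For reference, one hedged sketch of how the relay could be structured so that the upstream stream is still open when MVC writes the response (a long-lived HttpClient and the action name are assumptions, not code from the original post):

// Shared client: it must not be disposed before MVC has finished writing the response.
private static readonly HttpClient client = new HttpClient();

public async Task<ActionResult> Relay(string url)
{
    // Return as soon as the headers arrive; do not buffer the whole video.
    var response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);
    response.EnsureSuccessStatusCode();

    var upstream = await response.Content.ReadAsStreamAsync();

    // FileStreamResult disposes the stream once the response has been written.
    return new FileStreamResult(upstream, "video/mp2t");
}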
Currently I am using a two-step approach to fetch data from the Web API and deserialize the CSV records into objects.
var response = await httpClient.GetAsync(queryString);
using (var reader = new StreamReader(await response.Content.ReadAsStreamAsync()))
{
    var csvr = new CsvReader(reader);
    var responseValue = csvr.GetRecords<TrainingDataSetData>().ToList();
    result.Readings.AddRange(responseValue);
}
How do I optimize this code?
If you're trying to avoid creating an intermediate MemoryStream, you could use the GetStreamAsync method on HttpClient, which should return the raw NetworkStream for you to pass straight into CsvHelper, instead of ReadAsStreamAsync, which defaults to reading the full response into a MemoryStream before returning.
using (var reader = new StreamReader(await httpClient.GetStreamAsync(queryString)))
{
    var csvr = new CsvReader(reader);
    var responseValue = csvr.GetRecords<TrainingDataSetData>().ToList();
    result.Readings.AddRange(responseValue);
}
If you still need access to the HttpResponseMessage, you could achieve the same effect by using HttpCompletionOption.ResponseHeadersRead, which won't buffer the response before returning.
var response = await httpClient.GetAsync(queryString, HttpCompletionOption.ResponseHeadersRead);
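For completeness, a hedged sketch of the CSV read built on that overload (same CsvHelper types as above; this is only one possible arrangement):

using (var response = await httpClient.GetAsync(queryString, HttpCompletionOption.ResponseHeadersRead))
using (var reader = new StreamReader(await response.Content.ReadAsStreamAsync()))
{
    var csvr = new CsvReader(reader);
    result.Readings.AddRange(csvr.GetRecords<TrainingDataSetData>().ToList());
}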
Whether or not this is actually more efficient is something you would need to benchmark in your particular environment, since it may depend on the size of the response, the speed of the network, etc.
I'm working on a Windows client for uploading a lot of small files over an HTTP POST request.
I'm using .NET 4.5.2.
public async void Upload3(HttpClient client, string url, string[] files)
{
    foreach (var file in files)
    {
        using (var stream = new FileStream(file, FileMode.Open))
        {
            FileInfo info = new FileInfo(file);
            HttpContent fileStreamContent = new StreamContent(stream);
            using (var content = new MultipartFormDataContent())
            {
                content.Add(fileStreamContent);
                var response = await client.PostAsync(url, content);
                response.EnsureSuccessStatusCode();
                // code is stopping at the following line:
                string finalresults = await response.Content.ReadAsStringAsync();
                Console.WriteLine(finalresults);
                Console.WriteLine(" > Uploaded file " + info.Name);
            }
            stream.Close();
        }
    }
    Console.WriteLine("> Uploaded all files");
}
The code works fine for the very first file, but every other file is not uploaded. When I try to debug the code step by step, execution stops (in the second iteration of the loop) on this line:
string finalresults = await response.Content.ReadAsStringAsync();
Since the server log shows only a single request, I think the error already occurs on this line:
var response = await client.PostAsync(url, content);
Even if I use different HttpClient objects and different FileStream objects, the upload only works for the first file.
What is wrong with this code?
For your requirement, you can use a third-party library like RestSharp. There are lots of examples and good documentation, and it is easy to use.
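For illustration, a rough sketch using the classic RestSharp API (pre-v107); the base URL, resource name and form field name are placeholders and not part of the original question:

var client = new RestClient("http://example.com");        // placeholder base URL
foreach (var file in files)
{
    var request = new RestRequest("upload", Method.POST); // placeholder resource
    request.AddFile("file", file);                         // "file" is a placeholder field name
    // RestSharp sends AddFile content as multipart/form-data
    IRestResponse response = client.Execute(request);
    Console.WriteLine(response.Content);
}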
My program uses HttpClient to send a GET request to a Web API, and this returns a file.
I now use this code (simplified) to store the file to disc:
public async Task<bool> DownloadFile()
{
    var client = new HttpClient();
    var uri = new Uri("http://somedomain.com/path");
    var response = await client.GetAsync(uri);
    if (response.IsSuccessStatusCode)
    {
        var fileName = response.Content.Headers.ContentDisposition.FileName;
        using (var fs = new FileStream(@"C:\test\" + fileName, FileMode.Create, FileAccess.Write, FileShare.None))
        {
            await response.Content.CopyToAsync(fs);
            return true;
        }
    }
    return false;
}
Now, when this code runs, the process loads the entire file into memory. I would actually expect the content to be streamed from HttpResponseMessage.Content to the FileStream, so that only a small portion of it is held in memory at a time.
We are planning to use this on large files (> 1 GB), so is there a way to achieve that without having the whole file in memory?
Ideally without manually looping, reading a portion into a byte[] and writing that portion to the file stream until all of the content has been written?
It looks like this is by-design - if you check the documentation for HttpClient.GetAsync() you'll see it says:
The returned task object will complete after the whole response (including content) is read
You can instead use HttpClient.GetStreamAsync() which specifically states:
This method does not buffer the stream.
However, you don't then get access to the headers in the response as far as I can see. Since that's presumably a requirement (as you're getting the file name from the headers), you may want to use HttpWebRequest instead, which allows you to get the response details (headers etc.) without reading the whole response into memory. Something like:
public async Task<bool> DownloadFile()
{
    var uri = new Uri("http://somedomain.com/path");
    var request = WebRequest.CreateHttp(uri);
    var response = await request.GetResponseAsync();

    ContentDispositionHeaderValue contentDisposition;
    var fileName = ContentDispositionHeaderValue.TryParse(response.Headers["Content-Disposition"], out contentDisposition)
        ? contentDisposition.FileName
        : "noname.dat";

    using (var fs = new FileStream(@"C:\test\" + fileName, FileMode.Create, FileAccess.Write, FileShare.None))
    {
        await response.GetResponseStream().CopyToAsync(fs);
    }
    return true;
}
Note that if the request returns an unsuccessful response code an exception will be thrown, so you may wish to wrap in a try..catch and return false in this case as in your original example.
Instead of GetAsync(Uri), use the GetAsync(Uri, HttpCompletionOption) overload with the HttpCompletionOption.ResponseHeadersRead value.
The same applies to SendAsync and the other methods of HttpClient.
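For example, a minimal sketch of the download using that overload (the target path is the same placeholder as in the question):

using (var response = await client.GetAsync(uri, HttpCompletionOption.ResponseHeadersRead))
{
    response.EnsureSuccessStatusCode();
    var fileName = response.Content.Headers.ContentDisposition.FileName;
    using (var contentStream = await response.Content.ReadAsStreamAsync())
    using (var fs = new FileStream(@"C:\test\" + fileName, FileMode.Create, FileAccess.Write, FileShare.None))
    {
        // The body is streamed straight to disk; only the CopyToAsync buffer is held in memory.
        await contentStream.CopyToAsync(fs);
    }
}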
Sources:
docs (see remarks): https://learn.microsoft.com/en-us/dotnet/api/system.net.http.httpclient.getasync?view=netcore-1.1#System_Net_Http_HttpClient_GetAsync_System_Uri_System_Net_Http_HttpCompletionOption_
The returned Task object will complete based on the completionOption parameter after the part or all of the response (including content) is read.
.NET Core implementation of GetStreamAsync that uses HttpCompletionOption.ResponseHeadersRead https://github.com/dotnet/corefx/blob/release/1.1.0/src/System.Net.Http/src/System/Net/Http/HttpClient.cs#L163-L168
HttpClient spike in memory usage with large response
HttpClient.GetStreamAsync() with custom request? (don't mind the comment on response, the ResponseHeadersRead is what does the trick)
Another simple and quick way to do it is:
public async Task<bool> DownloadFile(string url)
{
    using (MemoryStream ms = new MemoryStream())
    {
        var stream = await new HttpClient().GetStreamAsync(url);
        await stream.CopyToAsync(ms);
        ... // use ms in what you want
        return true;
    }
}
Now you have the file downloaded as a stream in ms.
The following code gets a Stream from a URI and reads it in chunks using a loop. Note that behind the specified URI is an online radio stream, which means there is no known size.
var uri = new Uri("http://*******", UriKind.Absolute);
var http = new HttpClient();
var stream = await http.GetStreamAsync(uri);
var buffer = new byte[65536];
while (true)
{
    var read = await stream.ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false);
    Debug.WriteLine("Read: {0}", read);
}
Now while this works perfectly fine in a .NET 4.5 console app, this exact same code does not work as expected in WinRT - it reads the first chunk, but when ReadAsync is called for the second time, it just gets stuck and never continues.
If I switch the URI to a file (of known size) everything works fine in both projects.
Any tips?
EDIT: note that this behaviour happens only on WP8.1. I just searched some more on SO and found that my question might be a duplicate of this one: WP8.1 HttpClient Stream got only 65536 bytes data. If that is true, I will close my question.
Use HttpClient.GetAsync() with HttpCompletionOption.ResponseHeadersRead. That returns as soon as the headers are received; then call HttpResponseMessage.Content.ReadAsInputStreamAsync() to read the body as a stream.
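A hedged sketch of what that could look like with the Windows.Web.Http types (the URI is the same placeholder as in the question; AsStreamForRead comes from System.IO.WindowsRuntimeStreamExtensions and wraps the IInputStream as a System.IO.Stream):

var http = new Windows.Web.Http.HttpClient();
var uri = new Uri("http://*******", UriKind.Absolute);

// Return once the headers are in; the body is read afterwards as a stream.
var response = await http.GetAsync(uri, Windows.Web.Http.HttpCompletionOption.ResponseHeadersRead);
var inputStream = await response.Content.ReadAsInputStreamAsync();
var stream = inputStream.AsStreamForRead();

var buffer = new byte[65536];
int read;
while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
{
    Debug.WriteLine("Read: {0}", read);
}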
Looks like your while loop is infinite. How are you ensuring that it's finite? Instead of while (true), make it something like this:
var uri = new Uri("http://*******", UriKind.Absolute);
var http = new HttpClient();
var stream = http.GetStreamAsync(uri).Result;
using (var reader = new StreamReader(stream))
{
    while (!reader.EndOfStream)
    {
        var response = reader.ReadToEnd();
    }
}