I have a Web API to upload a single file, which is sent in the request body. I have the code below to read the file binaries from the stream:
Task<HttpResponseMessage> task = Request.Content.ReadAsStreamAsync().ContinueWith<HttpResponseMessage>(t =>
{
    if (t.IsFaulted || t.IsCanceled)
        throw new HttpResponseException(HttpStatusCode.InternalServerError);

    try
    {
        using (Stream stream = t.Result)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                stream.Seek(0, SeekOrigin.Begin);
                stream.CopyTo(ms);
                byte[] fileBinaries = ms.ToArray();
                // logic to process the file
            }
        }
    }
    catch (Exception e)
    {
        // exception handling logic
    }

    return Request.CreateResponse(HttpStatusCode.Created);
});
return task;
The API works fine when a file is uploaded with the call and returns HTTP status code 201. But if I don't attach a file to the call, it still returns the same code, because there is no check on the binary data received. I want to add that check so I can return an appropriate error message to the user.
I tried to perform this check by evaluating the length of the fileBinaries byte array read from Request.Content. But the array contains a few bytes representing the text [object FileList] (I don't know how these bytes end up in the array, since I haven't attached any file to the call), so this won't work for me.
I also tried HttpContext.Current.Request.Files.Count, but it always returns 0 (probably because the file binaries are sent in the request body), so it isn't suitable for my check either.
I can't rely on headers like the file name, as those are not sent in the request.
How can I perform this check?
Try using MultipartMemoryStreamProvider, which is well suited for file uploads with Web API:
public async Task<IHttpActionResult> UploadFile()
{
    var filesReadToProvider = await Request.Content.ReadAsMultipartAsync();

    foreach (var stream in filesReadToProvider.Contents)
    {
        var fileBytes = await stream.ReadAsByteArrayAsync();
        // process fileBytes
    }

    return Ok();
}
With this approach, fileBytes should not contain those stray 17 bytes of "[object FileList]" text, so a missing or empty file is straightforward to detect.
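For completeness, here is a minimal sketch of the missing-file check, assuming the client sends multipart/form-data; the BadRequest message and the surrounding structure are illustrative, not taken from the original code:
public async Task<IHttpActionResult> UploadFile()
{
    // Reject requests that are not multipart at all.
    if (!Request.Content.IsMimeMultipartContent())
        return StatusCode(HttpStatusCode.UnsupportedMediaType);

    var provider = await Request.Content.ReadAsMultipartAsync();

    // Keep only the parts that actually carry data.
    var fileParts = new List<byte[]>();
    foreach (var content in provider.Contents)
    {
        var bytes = await content.ReadAsByteArrayAsync();
        if (bytes.Length > 0)
            fileParts.Add(bytes);
    }

    if (fileParts.Count == 0)
        return BadRequest("No file was attached to the request.");

    // logic to process the file(s)
    return Ok();
}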
I am seeing very weird behaviour in a legacy repository that I cloned and am trying to work on.
There is an action for downloading files (the files are hosted on AWS):
(Redundant code omitted for clarity.)
public virtual async Task<ActionResult> Entity(EntityModel model)
{
    using (Stream stream = DownloadFileObject(model.Updates[GetUpdateNumberFromQueryString()].UpdateLink))
    using (MemoryStream ms = new MemoryStream())
    {
        await stream.CopyToAsync(ms).ConfigureAwait(false);
        return File(ms.ToArray(), "application/octet-stream", "The file.exe");
    }
}
The DownloadFileObject method connects to AWS, creates a request and returns the response stream:
public Stream GetS3FileObject(string objectKey, string fileName)
{
    GetObjectRequest request = new GetObjectRequest
    {
        BucketName = bucketName,
        Key = objectKey,
    };

    GetObjectResponse response = s3Client.GetObject(request);
    return response.ResponseStream;
}
This returns a proper stream with no errors, and I can see that the length indicates something around 35 MB, which is the correct size of the file I am fetching.
Now, I have tried several approaches for returning this stream as a downloadable file, and they all result in something very weird.
Instead of getting the file, the browser retrieves a 110 KB text file that looks pretty much like my current view's HTML.
The returned HTML contains the following error message (apart from the regular site content):
Error executing child request for handler
'System.Web.Mvc.HttpHandlerUtil+ServerExecuteHttpHandlerAsyncWrapper'.
OutputStream is not available when a custom TextWriter is used.
I tried several approaches, where the first and most obvious was just:
var stream = DownloadFileObject(model.Updates[GetUpdateNumberFromQueryString()].UpdateLink);
return File(stream, "application/octet-stream", "The file.exe");
Then I tried to copy that stream to a byte array and return that - however, in all cases the result is the same.
Up to the last moment, the debugger shows that my stream/byte array is 35 MB large.
Here are the headers:
Also, the rendered HTML of the Download button is as below:
Download update
The button is created in the view without any HTML helpers (Html.ActionLink() etc.):
@Model.SiteLabelDownloadUpdate
Any ideas what can be going on?
I am trying to return a PDF file from my ASP.NET Core 2 controller.
I have this code
(mostly borrowed from this SO question):
var net = new System.Net.WebClient();
// a random pdf file link
var fileLocation = "https://syntera.io/documents/T&C.pdf";
var data = net.DownloadData(fileLocation);
MemoryStream content = null;
try
{
    content = new MemoryStream(data);
    return new FileStreamResult(content, "Application/octet-stream");
}
finally
{
    content?.Dispose();
}
The code above is part of a service class that my controller calls. This is the code in my controller:
public async Task<IActionResult> DownloadFile(string fileName)
{
    var result = await _downloader.DownloadFileAsync(fileName);
    return result;
}
But I keep getting ObjectDisposedException: Cannot access a closed Stream.
The try/finally block was an attempt to fix it, taken from another SO question.
The main questions are: A) is this the right way to send a PDF file back to the browser, and B) if it isn't, how can I change the code to send the PDF to the browser?
Ideally, I don't want to first save the file on the server and then return it from the controller. I'd rather keep everything in memory.
The finally block always runs (even after the return), so it disposes of the content stream before it can be sent to the client, hence the error.
Ideally , I don't want to first save the file on the server and then return it to the controller. I'd rather return it while keeping everything in memory.
Use a FileContentResult class to take the raw byte array data and return it directly.
FileContentResult: Represents an ActionResult that when executed will write a binary file to the response.
async Task<IActionResult> DownloadFileAsync(string fileName)
{
    using (var net = new System.Net.WebClient())
    {
        byte[] data = await net.DownloadDataTaskAsync(fileName);
        return new FileContentResult(data, "application/pdf")
        {
            FileDownloadName = "file_name_here.pdf"
        };
    }
}
No need for the additional memory stream
For the file to be opened directly in the browser, you must specify:
Response.AppendHeader("content-disposition", "inline; filename=file.pdf");
return new FileStreamResult(stream, "application/pdf");
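Note that Response.AppendHeader comes from classic ASP.NET; since the question targets ASP.NET Core, a rough equivalent (a sketch, assuming you are inside a controller action with an open, readable stream) sets the header through Response.Headers instead:
// Sketch for ASP.NET Core: "inline" tells the browser to render the PDF
// rather than download it. The stream variable is assumed to be readable
// and positioned at the start.
Response.Headers["Content-Disposition"] = "inline; filename=file.pdf";
return new FileStreamResult(stream, "application/pdf");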
My program uses HttpClient to send a GET request to a Web API, and this returns a file.
I now use this code (simplified) to store the file to disk:
public async Task<bool> DownloadFile()
{
    var client = new HttpClient();
    var uri = new Uri("http://somedomain.com/path");
    var response = await client.GetAsync(uri);
    if (response.IsSuccessStatusCode)
    {
        var fileName = response.Content.Headers.ContentDisposition.FileName;
        using (var fs = new FileStream(@"C:\test\" + fileName, FileMode.Create, FileAccess.Write, FileShare.None))
        {
            await response.Content.CopyToAsync(fs);
            return true;
        }
    }
    return false;
}
Now, when this code runs, the process loads the whole file into memory. I would rather expect the content to be streamed from HttpResponseMessage.Content to the FileStream, so that only a small portion of it is held in memory at a time.
We are planning to use that on large files (> 1GB), so is there a way to achieve that without having all of the file in memory?
Ideally without manually looping through reading a portion to a byte[] and writing that portion to the file stream until all of the content is written?
It looks like this is by design; if you check the documentation for HttpClient.GetAsync() you'll see it says:
The returned task object will complete after the whole response
(including content) is read
You can instead use HttpClient.GetStreamAsync() which specifically states:
This method does not buffer the stream.
However, you then don't get access to the headers in the response, as far as I can see. Since that's presumably a requirement (you're getting the file name from the headers), you may want to use HttpWebRequest instead, which allows you to get the response details (headers etc.) without reading the whole response into memory. Something like:
public async Task<bool> DownloadFile()
{
    var uri = new Uri("http://somedomain.com/path");
    var request = WebRequest.CreateHttp(uri);
    var response = await request.GetResponseAsync();

    ContentDispositionHeaderValue contentDisposition;
    var fileName = ContentDispositionHeaderValue.TryParse(response.Headers["Content-Disposition"], out contentDisposition)
        ? contentDisposition.FileName
        : "noname.dat";

    using (var fs = new FileStream(@"C:\test\" + fileName, FileMode.Create, FileAccess.Write, FileShare.None))
    {
        await response.GetResponseStream().CopyToAsync(fs);
    }
    return true;
}
Note that if the request returns an unsuccessful response code an exception will be thrown, so you may wish to wrap in a try..catch and return false in this case as in your original example.
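For completeness, a sketch of that wrapper, assuming the DownloadFile method above (an unsuccessful response surfaces as a WebException):
public async Task<bool> TryDownloadFile()
{
    try
    {
        return await DownloadFile();
    }
    catch (WebException)
    {
        // 4xx/5xx responses from GetResponseAsync land here.
        return false;
    }
}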
Instead of GetAsync(Uri), use the GetAsync(Uri, HttpCompletionOption) overload with the HttpCompletionOption.ResponseHeadersRead value.
The same applies to SendAsync and other methods of HttpClient
Sources:
docs (see remarks): https://learn.microsoft.com/en-us/dotnet/api/system.net.http.httpclient.getasync?view=netcore-1.1#System_Net_Http_HttpClient_GetAsync_System_Uri_System_Net_Http_HttpCompletionOption_
The returned Task object will complete based on the completionOption parameter after the part or all of the response (including content) is read.
.NET Core implementation of GetStreamAsync that uses HttpCompletionOption.ResponseHeadersRead https://github.com/dotnet/corefx/blob/release/1.1.0/src/System.Net.Http/src/System/Net/Http/HttpClient.cs#L163-L168
HttpClient spike in memory usage with large response
HttpClient.GetStreamAsync() with custom request? (don't mind the comment on response, the ResponseHeadersRead is what does the trick)
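A minimal sketch of the streaming download with ResponseHeadersRead, based on the code in the question (the URL and target path are placeholders):
public async Task<bool> DownloadFile()
{
    var client = new HttpClient();
    var uri = new Uri("http://somedomain.com/path");

    // Completes as soon as the headers arrive; the body is not buffered.
    using (var response = await client.GetAsync(uri, HttpCompletionOption.ResponseHeadersRead))
    {
        if (!response.IsSuccessStatusCode)
            return false;

        var fileName = response.Content.Headers.ContentDisposition.FileName;
        using (var httpStream = await response.Content.ReadAsStreamAsync())
        using (var fs = new FileStream(@"C:\test\" + fileName, FileMode.Create, FileAccess.Write, FileShare.None))
        {
            // Copies in chunks, so only a small buffer is held in memory.
            await httpStream.CopyToAsync(fs);
        }
        return true;
    }
}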
Another simple and quick way to do it is:
public async Task<bool> DownloadFile(string url)
{
    using (MemoryStream ms = new MemoryStream())
    {
        using (var client = new HttpClient())
        using (var stream = await client.GetStreamAsync(url))
        {
            await stream.CopyToAsync(ms);
        }
        // ... use ms however you want
        return true;
    }
}
Now you have the downloaded file as a stream in ms (note that the whole file is buffered in memory with this approach).
I need to upload a file using a Stream (Azure Blob Storage), and I just cannot work out how to get the stream from the object itself. See the code below.
I'm new to Web API and have used some examples. I'm getting the files and file data, but it's not the correct type for my methods to upload it. Therefore, I need to get or convert it into a normal Stream, which seems a bit hard at the moment :)
I know I need to use ReadAsStreamAsync().Result in some way, but it crashes in the foreach loop since I'm getting two provider.Contents (the first one seems right, the second one does not).
[System.Web.Http.HttpPost]
public async Task<HttpResponseMessage> Upload()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        return this.Request.CreateResponse(HttpStatusCode.UnsupportedMediaType);
    }

    var provider = GetMultipartProvider();
    var result = await Request.Content.ReadAsMultipartAsync(provider);

    // On upload, files are given a generic name like "BodyPart_26d6abe1-3ae1-416a-9429-b35f15e6e5d5",
    // so this is how you can get the original file name
    var originalFileName = GetDeserializedFileName(result.FileData.First());

    // uploadedFileInfo object will give you some additional stuff like file length,
    // creation time, directory name, a few filesystem methods etc..
    var uploadedFileInfo = new FileInfo(result.FileData.First().LocalFileName);

    // Remove this line as well as GetFormData method if you're not
    // sending any form data with your upload request
    var fileUploadObj = GetFormData<UploadDataModel>(result);

    Stream filestream = null;
    using (Stream stream = new MemoryStream())
    {
        foreach (HttpContent content in provider.Contents)
        {
            BinaryFormatter bFormatter = new BinaryFormatter();
            bFormatter.Serialize(stream, content.ReadAsStreamAsync().Result);
            stream.Position = 0;
            filestream = stream;
        }
    }

    var storage = new StorageServices();
    storage.UploadBlob(filestream, originalFileName);

    return Request.CreateResponse(HttpStatusCode.OK);
}

private MultipartFormDataStreamProvider GetMultipartProvider()
{
    var uploadFolder = "~/App_Data/Tmp/FileUploads"; // you could put this in web.config
    var root = HttpContext.Current.Server.MapPath(uploadFolder);
    Directory.CreateDirectory(root);
    return new MultipartFormDataStreamProvider(root);
}
This is identical to a dilemma I had a few months ago (capturing the upload stream before the MultipartStreamProvider took over and auto-magically saved the stream to a file). The recommendation was to inherit that class and override the methods ... but that didn't work in my case. :( (I wanted the functionality of both the MultipartFileStreamProvider and MultipartFormDataStreamProvider rolled into one MultipartStreamProvider, without the autosave part).
This might help; here's one written by one of the Web API developers, and this from the same developer.
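For reference, a minimal sketch of that inheritance approach (assuming System.Net.Http.Formatting): a provider that overrides GetStream so each part is buffered in memory instead of being auto-saved to disk. In effect this reproduces what MultipartMemoryStreamProvider already does, as the answer below shows.
// Sketch: a MultipartStreamProvider that buffers each part in memory
// instead of writing it to a file. Assumes you only need the raw part
// streams, not the form-data handling of MultipartFormDataStreamProvider.
public class InMemoryMultipartStreamProvider : MultipartStreamProvider
{
    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        // Each part's body is written into its own MemoryStream; the parsed
        // parts are then available through this.Contents after
        // ReadAsMultipartAsync completes.
        return new MemoryStream();
    }
}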
I just wanted to post my answer so that if anybody encounters the same issue, they can find a solution here.
MultipartMemoryStreamProvider stream = await this.Request.Content.ReadAsMultipartAsync();
foreach (var st in stream.Contents)
{
    var fileBytes = await st.ReadAsByteArrayAsync();
    string base64 = Convert.ToBase64String(fileBytes);

    var contentHeader = st.Headers;
    string filename = contentHeader.ContentDisposition.FileName.Replace("\"", "");
    string filetype = contentHeader.ContentType.MediaType;
}
I used MultipartMemoryStreamProvider and got all the details like filename and filetype from the header of content.
Hope this helps someone.
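Since the original goal was an Azure Blob Storage upload, here is a minimal sketch of pushing those bytes to a blob with the WindowsAzure.Storage client; the connection string, container name, and reuse of the filename/filetype/fileBytes variables from the snippet above are assumptions, not part of the original answer.
// Sketch: upload one multipart part to Azure Blob Storage
// (assumes the WindowsAzure.Storage NuGet package and its CloudBlockBlob API).
var account = CloudStorageAccount.Parse(connectionString); // placeholder connection string
var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
await container.CreateIfNotExistsAsync();

var blob = container.GetBlockBlobReference(filename); // filename from the content headers
blob.Properties.ContentType = filetype;

using (var ms = new MemoryStream(fileBytes))
{
    await blob.UploadFromStreamAsync(ms);
}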
I'm trying to download files from my FTP server, multiple at the same time. When I use DownloadFileAsync, random files come back with a byte[] length of 0. I can 100% confirm the files exist on the server and have content, AND the FTP server (running FileZilla Server) isn't reporting errors and says the files have been transferred.
private async Task<IList<FtpDataResult>> DownloadFileAsync(FtpFileName ftpFileName)
{
    var address = new Uri(string.Format("ftp://{0}{1}", _server, ftpFileName.FullName));
    var webClient = new WebClient
    {
        Credentials = new NetworkCredential(_username, _password)
    };

    var bytes = await webClient.DownloadDataTaskAsync(address);
    using (var stream = new MemoryStream(bytes))
    {
        // extract the stream data (either files in a zip OR a file);
        return result;
    }
}
When I try this code, it's slower (of course) but all the files have content.
private async Task<IList<FtpDataResult>> DownloadFileAsync(FtpFileName ftpFileName)
{
    var address = new Uri(string.Format("ftp://{0}{1}", _server, ftpFileName.FullName));
    var webClient = new WebClient
    {
        Credentials = new NetworkCredential(_username, _password)
    };

    // NOTICE: I've removed the AWAIT and use a different method.
    var bytes = webClient.DownloadData(address);
    using (var stream = new MemoryStream(bytes))
    {
        // extract the stream data (either files in a zip OR a file);
        return result;
    }
}
Can anyone see what I'm doing wrong, please? Why would DownloadFileAsync randomly return zero bytes?
Try out FtpWebRequest/FtpWebResponse classes. You have more available to you for debugging purposes.
FtpWebRequest - http://msdn.microsoft.com/en-us/library/system.net.ftpwebrequest(v=vs.110).aspx
FtpWebResponse - http://msdn.microsoft.com/en-us/library/system.net.ftpwebresponse(v=vs.110).aspx
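A minimal sketch of that FtpWebRequest approach, reusing the server address, credentials, and FtpFileName type from the question; the method returns the raw bytes and leaves the zip/file extraction to the caller.
private async Task<byte[]> DownloadBytesWithFtpWebRequestAsync(FtpFileName ftpFileName)
{
    var address = new Uri(string.Format("ftp://{0}{1}", _server, ftpFileName.FullName));

    var request = (FtpWebRequest)WebRequest.Create(address);
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.Credentials = new NetworkCredential(_username, _password);

    using (var response = (FtpWebResponse)await request.GetResponseAsync())
    using (var responseStream = response.GetResponseStream())
    using (var ms = new MemoryStream())
    {
        await responseStream.CopyToAsync(ms);

        // response.StatusDescription is handy for debugging partial transfers.
        return ms.ToArray();
    }
}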
Take a look at http://netftp.codeplex.com/. It appears as though almost all methods implement IAsyncResult. There isn't much documentation on how to get started, but I would assume that it is similar to the synchronous FTP classes from the .NET framework. You can install the nuget package here: https://www.nuget.org/packages/System.Net.FtpClient/