I want to add a button that will download a dynamically generated CSV file.
I think I need to use FileStreamResult (or possibly FileContentResult) but I have been unable to find an example that shows how to do this.
I've seen examples that create a physical file, and then download that. But my ideal solution would write directly to the response stream, which would be far more efficient than creating a file or first building the string in memory.
Has anyone seen an example of dynamically generating a file for download in Razor Pages (not MVC)?
So here's what I came up with.
Markup:
<a class="btn btn-success" asp-page-handler="DownloadCsv">
Download CSV
</a>
Handler:
public IActionResult OnGetDownloadCsv()
{
using MemoryStream memoryStream = new MemoryStream();
using CsvWriter writer = new CsvWriter(memoryStream);
// Write to memoryStream using SoftCircuits.CsvParser
writer.Flush(); // This is important!
FileContentResult result = new FileContentResult(memoryStream.GetBuffer(), "text/csv")
{
FileDownloadName = "Filename.csv"
};
return result;
}
This code works but I wish it used memory more efficiently. As is, it writes the entire file contents to memory, and then copies that memory to the result. So a large file would exist twice in memory before anything is written to the response stream. I was curious about FileStreamResult but wasn't able to get that working.
If someone can improve on this, I'd gladly mark your answer as the accepted one.
UPDATE:
So I realized I can adapt the code above to use FileStreamResult by replacing the last block with this:
memoryStream.Seek(0, SeekOrigin.Begin);
FileStreamResult result = new FileStreamResult(memoryStream, "text/csv")
{
FileDownloadName = "Filename.csv"
};
return result;
This works almost the same except that, instead of calling memoryStream.GetBuffer() to copy all the bytes, it just passes the memory stream object. This is an improvement as I am not needlessly copying the bytes.
However, the downside is that I have to remove my two using statements or else I'll get an exception:
ObjectDisposedException: Cannot access a closed Stream.
Looks like it's a trade-off between copying the bytes an extra time and not cleaning up my streams and CSV writer.
In the end, I'm able to prevent the CSV writer from closing the stream when it's disposed, and since MemoryStream does not have unmanaged resources there should be no harm in leaving it open.
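For reference, here's a rough sketch of what that final approach can look like. The leave-open idea is shown with a plain StreamWriter, since the exact option name on the CSV writer depends on the library; the point is that the writer must not close the MemoryStream, and FileStreamResult disposes the stream itself after the response has been written:
public IActionResult OnGetDownloadCsv()
{
    // Deliberately not wrapped in using: FileStreamResult disposes the stream
    // once the response has been written
    var memoryStream = new MemoryStream();

    // leaveOpen: true keeps the writer from closing the MemoryStream when disposed
    using (var writer = new StreamWriter(memoryStream, Encoding.UTF8, 1024, leaveOpen: true))
    {
        writer.WriteLine("Id,Name");      // illustrative content only
        writer.WriteLine("1,Example");
    }

    memoryStream.Seek(0, SeekOrigin.Begin);
    return new FileStreamResult(memoryStream, "text/csv")
    {
        FileDownloadName = "Filename.csv"
    };
}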
Related
I have a PDF file stored in a database as a byte array.
I'm reading the PDF byte array from my database back into my application.
Now, I'm trying to display the PDF with the RadPdfViewer but it is not working.
Here is my code:
byte[] pdfAsByteArray = File.ReadAllBytes(@"C:\Users\Username\Desktop\Testfile.pdf");
//Save "pdfAsByteArray" into database
//...
//Load pdf from database into byte[] variable "pdfAsByteArray"
using (var memoryStream = new MemoryStream(pdfAsByteArray))
{
this.PdfViewer.DocumentSource = new PdfDocumentSource(memoryStream);
}
When I execute the application, I just get an empty PdfViewer.
Question: How do I set the DocumentSource the right way?
Question: How do I dispose of the stream? (Note that using doesn't work.)
Note: I want to avoid things like writing a temp file to disk.
Edit:
I figured it out but I am not completely satisfied with this solution:
Not working:
using (var memoryStream = new MemoryStream(pdfAsByteArray))
{
this.PdfViewer.DocumentSource = new PdfDocumentSource(memoryStream);
}
Working:
var memoryStream = new MemoryStream(pdfAsByteArray);
this.PdfViewer.DocumentSource = new PdfDocumentSource(memoryStream);
I don't know how Telerik's RadPdfViewer component works internally, but I want to dispose the Stream.
From the Telerik documentation (particularly with regard to the "Caution" stating that this loading is done asynchronously), I believe this should work while still giving you a way to close the stream (not as cleanly as if you were able to use a using block, but still better than leaving it open):
//class variable
private MemoryStream _stream;
_stream = new MemoryStream(pdfAsByteArray);
var docSource = new PdfDocumentSource(_stream);
docSource.Loaded += (sender, args) => { if (_stream != null) _stream.Dispose(); };
this.PdfViewer.DocumentSource = docSource;
I did this free-hand and don't have access to the Telerik API so the exact details of the Loaded event are not available to me.
EDIT
Here's the relevant details from documentation I found (emphasis mine):
The PdfDocumentSource loads the document asynchronously. If you want
to obtain a reference to the DocumentSource after you have imported a
document, you should use the Loaded event of the PdfDocumentSource
object to obtain the loaded document. This is also a convenient method
that can be used to close the stream if you are loading a PDF from a
stream.
You need to handle the PdfDocumentSource Loaded event. This is when the stream gets loaded and used up, so it can be closed/disposed at that point.
Another method I've used is:
this.PdfViewer.PdfjsProcessingSettings.FileSettings.Data = Convert.ToBase64String(File.ReadAllBytes(#"C:\Users\Username\Desktop\Testfile.pdf"));
In an attempt to create a non-buffered file upload I have extended System.Web.Http.WebHost.WebHostBufferPolicySelector, overriding function UseBufferedInputStream() as described in this article: http://www.strathweb.com/2012/09/dealing-with-large-files-in-asp-net-web-api/. When a file is POSTed to my controller, I can see in trace output that the overridden function UseBufferedInputStream() is definitely returning FALSE as expected. However, using diagnostic tools I can see the memory growing as the file is being uploaded.
The heavy memory usage appears to be occurring in my custom MediaTypeFormatter (something like the FileMediaFormatter here: http://lonetechie.com/). It is in this formatter that I would like to incrementally write the incoming file to disk, but I also need to parse json and do some other operations with the Content-Type:multipart/form-data upload. Therefore I'm using HttpContent method ReadAsMultiPartAsync(), which appears to be the source of the memory growth. I have placed trace output before/after the "await", and it appears that while the task is blocking the memory usage is increasing fairly rapidly.
Once I find the file content in the parts returned by ReadAsMultiPartAsync(), I am using Stream.CopyTo() in order to write the file contents to disk. This writes to disk as expected, but unfortunately the source file is already in memory by this point.
Does anyone have any thoughts about what might be going wrong? It seems that ReadAsMultiPartAsync() is buffering the whole post data; if that is true why do we require var fileStream = await fileContent.ReadAsStreamAsync() to get the file contents? Is there another way to accomplish the splitting of the parts without reading them into memory? The code in my MediaTypeFormatter looks something like this:
// save the stream so we can seek/read again later
Stream stream = await content.ReadAsStreamAsync();
var parts = await content.ReadAsMultipartAsync(); // <- memory usage grows rapidly
if (!content.IsMimeMultipartContent())
{
throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}
//
// pull data out of parts.Contents, process json, etc.
//
// find the file data in the multipart contents
var fileContent = parts.Contents.FirstOrDefault(
x => x.Headers.ContentDisposition.DispositionType.ToLower().Trim() == "form-data" &&
x.Headers.ContentDisposition.Name.ToLower().Trim() == "\"" + DATA_CONTENT_DISPOSITION_NAME_FILE_CONTENTS + "\"");
// write the file to disk
using (var fileStream = await fileContent.ReadAsStreamAsync())
{
using (FileStream toDisk = File.OpenWrite("myUploadedFile.bin"))
{
((Stream)fileStream).CopyTo(toDisk);
}
}
WebHostBufferPolicySelector only specifies whether the underlying request is buffered or bufferless. This is what Web API will do under the hood:
IHostBufferPolicySelector policySelector = _bufferPolicySelector.Value;
bool isInputBuffered = policySelector == null ? true : policySelector.UseBufferedInputStream(httpContextBase);
Stream inputStream = isInputBuffered
? requestBase.InputStream
: httpContextBase.ApplicationInstance.Request.GetBufferlessInputStream();
So if your implementation returns false, then the request is bufferless.
However, ReadAsMultipartAsync() loads everything into a MemoryStream, because if you don't specify a provider, it defaults to MultipartMemoryStreamProvider.
To get the files to save automatically to disk as every part is processed, use MultipartFormDataStreamProvider (if you deal with files and form data) or MultipartFileStreamProvider (if you deal with just files).
There is an example on asp.net or here. In these examples everything happens in controllers, but there is no reason why you wouldn't use it in i.e. a formatter.
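For example, here's a sketch of what that looks like in a controller (the target folder and names are just placeholders); with MultipartFormDataStreamProvider each file part is streamed to disk as it is read, so the upload never has to fit in memory:
public async Task<HttpResponseMessage> PostFormData()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    }

    // Each part is written under App_Data as it is processed
    string root = HttpContext.Current.Server.MapPath("~/App_Data");
    var provider = new MultipartFormDataStreamProvider(root);

    await Request.Content.ReadAsMultipartAsync(provider);

    // Form fields end up in provider.FormData, file parts in provider.FileData
    foreach (MultipartFileData file in provider.FileData)
    {
        Trace.WriteLine("Saved to: " + file.LocalFileName);
    }

    return Request.CreateResponse(HttpStatusCode.OK);
}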
Another option, if you really want to play with streams, is to implement a custom class inheriting from MultipartStreamProvider that would fire whatever processing you want as soon as it grabs part of the stream. The usage would be similar to the aforementioned providers - you'd need to pass it to the ReadAsMultipartAsync(provider) method.
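A minimal sketch of that last idea (the class name and paths are mine, not from the framework): the provider only has to return a stream for each part, so returning a FileStream sends every part straight to disk as it arrives:
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;

public class StreamingMultipartProvider : MultipartStreamProvider
{
    private readonly string _rootPath;

    public StreamingMultipartProvider(string rootPath)
    {
        _rootPath = rootPath;
    }

    // Called once per part; the part's body is copied into whatever stream we return
    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        string fileName = Path.Combine(_rootPath, Guid.NewGuid().ToString("N"));
        return File.Create(fileName);
    }
}

// Usage: await content.ReadAsMultipartAsync(new StreamingMultipartProvider(rootPath));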
Finally - if you are feeling suicidal - since the underlying request stream is bufferless theoretically you could use something like this in your controller or formatter:
Stream stream = HttpContext.Current.Request.GetBufferlessInputStream();
byte[] b = new byte[32 * 1024];
int n;
while ((n = stream.Read(b, 0, b.Length)) > 0)
{
    // do stuff with this chunk of the stream
}
But of course that's very, for lack of a better word, "ghetto."
I'm having an issue deleting a file that was created only so it could be sent as an email attachment and then viewed in the browser. The file exists just for that email, so now I need to delete it. How can I do this?
Here is what I have so far.
public void SendEmail()
{
EmailClient.Send(mailMessage);
//View PDF Certificate in Browser
ViewPDFinBrowser((string)fileObject);
DeleteGeneratedTempCertificateFile((string)fileObject);
}
public void ViewPDFinBrowser(string filePath)
{
PdfReader reader = new PdfReader(filePath);
MemoryStream ms = new MemoryStream();
PdfStamper stamper = new PdfStamper(reader, ms);
stamper.ViewerPreferences = PdfWriter.PageLayoutSinglePage | PdfWriter.PageModeUseThumbs;
stamper.Close();
Response.Clear();
Response.ContentType = "application/pdf";
byte[] pdfBytes = ms.ToArray();   // ToArray() returns only the written bytes, unlike GetBuffer()
Response.OutputStream.Write(pdfBytes, 0, pdfBytes.Length);
Response.OutputStream.Close();
HttpContext.Current.ApplicationInstance.CompleteRequest();
}
public static void DeleteGeneratedTempCertificateFile(Object fileObject)
{
string filePath = (string)fileObject;
if (File.Exists(filePath))
{
File.Delete(filePath);
}
}
So here are the steps I need when I call SendEmail():
1) Send an email with the attachment --> temp file created
2) View the temp file in the browser
3) Delete the temp file
I understand that as long as the file is in the response object, I cannot do anything with it because I get the error message ("File used by another process"). If I close the response stream, the file can be deleted, but then I can't view it in the browser.
I was thinking that if I can manage to open the file in the browser in a new window on a button click, I would be able to delete the file.
OR
I am thinking I could delete the file after 10 minutes, as the user won't be on the site viewing the PDF for more than 1-2 minutes.
Please advise me on one of these solutions with example code.
I appreciate your time and help.
As others have said, it's better to use the MemoryStream as-is without writing temporary files to the disk. Sometimes implementations of 3rd-party components just won't allow this; in such cases, after writing the binary contents of the PDF file, be sure to call Close() (and/or possibly Dispose(); always check MSDN or the 3rd-party API docs for what .Close() actually does) on all streams that are no longer needed. In your case, close ms and reader after completing the HTTP request.
In most cases, consider implementing the using pattern. See http://msdn.microsoft.com/en-us/library/aa664736.aspx for more details. However, remember that there are caveats to this approach too, for example in WCF clients, where exceptions thrown inside the using block can result in not everything actually being disposed.
Also, keep in mind any concurrency issues. Keep the temporary file name random enough, and consider situations where the file already exists on the local disk (i.e. fail the operation and do not send out binary content that the requester is not supposed to see, etc.).
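As a rough sketch of that idea, reusing the question's mailMessage, EmailClient and Response objects and assuming the certificate PDF is already available as a byte array (generatedPdf is a hypothetical placeholder): the same bytes can feed both the attachment and the browser response, so there is no temp file to delete afterwards:
public void SendEmailAndShowPdf(byte[] generatedPdf)
{
    // Attach the in-memory PDF instead of a file on disk
    mailMessage.Attachments.Add(
        new Attachment(new MemoryStream(generatedPdf), "Certificate.pdf", "application/pdf"));
    EmailClient.Send(mailMessage);

    // Write the same bytes to the browser; nothing to clean up afterwards
    Response.Clear();
    Response.ContentType = "application/pdf";
    Response.OutputStream.Write(generatedPdf, 0, generatedPdf.Length);
    HttpContext.Current.ApplicationInstance.CompleteRequest();
}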
In an MVC project, I have an ActionLink, which, when clicked, should take an arbitrary number of objects associated with the user (supplied as a List), dynamically build a CSV file of the objects' attributes and prompt a download. The file shouldn't persist on the server, so it either needs to be removed after download or the download needs to come from a stream or similar. What's the neatest way of going about this? I've seen examples of this which manually compile a CSV string and use HttpResponse.Write(String) within the controller, but is this best practice?
I have a function that is similar to this. I'm sure there is a "better" way to automatically find each of the members of the User object you pass it, but this way works.
public ActionResult ExportCSV(List<User> input)
{
    // The MemoryStream is deliberately not wrapped in a using block:
    // the FileStreamResult returned by File() disposes it after the response is written
    MemoryStream output = new MemoryStream();

    // leaveOpen: true stops the writer from closing the MemoryStream when it is disposed
    using (StreamWriter writer = new StreamWriter(output, Encoding.UTF8, 1024, leaveOpen: true))
    {
        foreach (User user in input)
        {
            writer.Write(user.firstattribute);
            writer.Write(",");
            writer.Write(user.secondattribute);
            writer.Write(",");
            writer.Write(user.thirdattribute);
            writer.Write(",");
            writer.Write(user.lastattribute);
            writer.WriteLine();
        }
        writer.Flush();
    }
    output.Position = 0;
    return File(output, "text/comma-separated-values", "report.csv");
}
If you have the stream, returning a FileStreamResult is probably the "cleanest" way to do it in ASP.NET MVC.
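Something along these lines (ExportCsv and BuildCsvStream are placeholders for your own action and CSV-building code):
public ActionResult ExportCsv()
{
    Stream csvStream = BuildCsvStream();   // however you produce the CSV stream
    // File() wraps the stream in a FileStreamResult and sets the download headers
    return File(csvStream, "text/csv", "report.csv");
}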
You should manually assemble a string, then return Content(str, "text/csv");
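A small sketch of that approach, assuming the same User list as in the other answers (the header row is illustrative):
public ActionResult ExportCsv(List<User> users)
{
    var sb = new StringBuilder();
    sb.AppendLine("FirstAttribute,SecondAttribute");   // illustrative header row
    foreach (var user in users)
    {
        sb.AppendLine(user.firstattribute + "," + user.secondattribute);
    }
    return Content(sb.ToString(), "text/csv");
}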
LINQ to CSV is reportedly a nice library for generating CSV content. Once you generate your CSV, use the File method of Controller to return a FileStreamResult from your action; that lets you send any arbitrary stream as a response, whether it's a FileStream, a MemoryStream, or any other type of Stream.
I've produced a MVC app that when you access /App/export it zips up all the files in a particular folder and then returns the zip file. The code looks something like:
public ActionResult Export() {
exporter = new Project.Exporter("/mypath/");
return File(exporter.filePath, "application/zip", exporter.fileName);
}
What I would like to do is return the file to the user and then delete it. Is there any way to set a timeout to delete the file? or hold onto the file handle so the file isn't deleted till after the request is finished?
Sorry, I do not have the code right now...
But the idea here is: just avoid creating a temporary file! You may write the zipped data directly to the response, using a MemoryStream for that.
EDIT: Something along those lines (it's not using a MemoryStream, but the idea is the same, avoiding creating a temp file, here using the DotNetZip library):
DotNetZip now can save directly to ASP.NET Response.OutputStream.
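For example, a sketch using DotNetZip (Ionic.Zip) against the same folder as the question; the archive is compressed and streamed straight into Response.OutputStream, so no temp file ever hits the disk:
public ActionResult Export()
{
    Response.Clear();
    Response.ContentType = "application/zip";
    Response.AddHeader("Content-Disposition", "attachment; filename=export.zip");

    using (var zip = new Ionic.Zip.ZipFile())
    {
        zip.AddDirectory(Server.MapPath("/mypath/"));
        // Save() writes the zip entries directly into the response as they are compressed
        zip.Save(Response.OutputStream);
    }

    Response.Flush();
    return new EmptyResult();
}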
I know this thread is old, but here is a solution if someone still faces this:
1) Create the temp file normally.
2) Read the file into a byte array in memory with System.IO.File.ReadAllBytes().
3) Delete the file from disk.
4) Return the file bytes with File(byte[], "application/zip", "SomeName.zip") from your controller.
Code Sample here:
//Load ZipFile
var toDownload = System.IO.File.ReadAllBytes(zipFile);
//Clean Files
Directory.Delete(tmpFolder, true);
System.IO.File.Delete(zipFile);
//Return result for download
return File(toDownload,"application/zip",$"Certificates_{rs}.zip");
You could create a Stream implementation similar to FileStream, but which deletes the file when it is disposed.
There's some good code in this SO post.
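A minimal sketch of that idea (the class name is mine): a FileStream subclass that deletes its backing file when the stream is disposed, which happens after the FileStreamResult has finished writing the response. The built-in FileOptions.DeleteOnClose flag achieves much the same thing if you don't want a custom class.
using System.IO;

public class DeleteOnCloseFileStream : FileStream
{
    private readonly string _path;

    public DeleteOnCloseFileStream(string path)
        : base(path, FileMode.Open, FileAccess.Read)
    {
        _path = path;
    }

    protected override void Dispose(bool disposing)
    {
        base.Dispose(disposing);
        // Best-effort cleanup once the response has been written
        try { File.Delete(_path); } catch (IOException) { }
    }
}

// Usage in the Export action:
// return File(new DeleteOnCloseFileStream(exporter.filePath), "application/zip", exporter.fileName);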