I have file download functionality in my ASP.NET project, and the download code looks like this:
public class Download : IHttpHandler
{
    // ProcessRequest (not shown in this snippet) presumably parses the Guid
    // from the request and calls DownloadPsListingProduct.

    private void DownloadPsListingProduct(Guid which)
    {
        string path = GetFilePathFromGuid(which);
        FileInfo file = new FileInfo(path);

        HttpResponse response = HttpContext.Current.Response;
        response.ClearContent();
        response.Clear();
        response.ContentType = "application/octet-stream";
        response.AddHeader("Content-Disposition",
            "attachment;filename=\"" + file.Name.NeutralizationCrlfSequences() + "\";");
        response.TransmitFile(file.FullName);
        response.Flush();
        response.End();
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }
}
This code works like a charm when I download a single file at a time.
But while one file is downloading, if I request another file, the second download waits for the first one to complete before it starts.
Note: I am sending a new request to download each file.
I want to avoid this one-file-at-a-time behavior: the user should be able to download a file without waiting for the previous download to complete.
ASP.NET Web API 2 should be able to handle this with very little ceremony. There's an example here, but I'll reiterate the important parts:
public class FilesController : ApiController
{
    public IHttpActionResult GetFile(Guid fileId)
    {
        var filePath = GetFilePathFromGuid(fileId);
        var fileName = Path.GetFileName(filePath);
        var mimeType = MimeMapping.GetMimeMapping(fileName);

        // OkFileDownloadResult is the helper result type from the linked example
        return new OkFileDownloadResult(filePath, mimeType, fileName, this);
    }
}
Of course, hooking up routing etc in ASP.NET Web API 2 is quite different from hooking up an IHttpHandler, but there's also a plethora of examples on the internet (including here on SO) on how to get started with that.
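For reference, a minimal sketch of the wiring might look like the standard convention-based Web API 2 setup below; this is just the default template routing, not anything specific to the question, and with it GetFile(Guid fileId) would be reachable via a query string such as /api/files?fileId={guid}:

    using System.Web.Http;

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Standard Web API 2 convention-based route; called from
            // Application_Start via GlobalConfiguration.Configure(WebApiConfig.Register).
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });
        }
    }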
Related
I have an application that is used by people inside and outside my organization. This application exports both Excel (.xlsx) and PDF files. I'm having trouble with the file exports: it works fine for people on my network, but people outside my network get a "File read error. File type is unsupported or the file is corrupted", and the file is only 127 bytes instead of its correct size (normally about 2 megabytes). I need people outside my network to be able to download and open the files successfully.
I've also tried handler classes tailored to each specific file type, and I've tried opening up the directory containing the file so that "Everyone" has read access, but I'm really not sure how to fix this. The web server is running IIS 10.
public class fileExportHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string fileToExport = "";
        string fileName = "exportedFile";
        string fileType = "";

        System.Web.HttpRequest request = System.Web.HttpContext.Current.Request;

        if (request.QueryString["fileToExport"] != null)
        {
            fileToExport = request.QueryString["fileToExport"].ToString();
            string[] fileParts = fileToExport.Split('.');
            fileType = fileParts[1];

            if (request.QueryString["fileName"] != null)
            {
                fileName = request.QueryString["fileName"].ToString();
            }
        }

        fileToExport = @"E:\Website\Cascade\" + fileToExport;

        // Send the file to the browser
        System.Web.HttpResponse Response = System.Web.HttpContext.Current.Response;
        Response.ClearHeaders();
        Response.Clear();
        Response.Buffer = true;

        string contentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
        if (fileType == "pdf")
            contentType = "application/pdf";

        Response.ContentType = contentType;
        Response.AddHeader("content-disposition", "attachment; filename=" + fileName + "." + fileType);
        Response.TransmitFile(fileToExport);
        Response.Flush();
        HttpContext.Current.ApplicationInstance.CompleteRequest();
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }
}
Got it figured out. It was a network issue: my network people had moved me to a new server, and people outside my organization were still seeing the old one while people inside were seeing the new version. Once they pointed outside users at the new server, my issue was resolved.
I've been trying to get my ASP.NET MVC website to export some data as an Excel file. For hours I thought that NPOI was just producing garbage, so I switched over to EPPlus. I tested it in LINQPad and it created a proper working XLSX file, so I moved the code over to the MVC app. AGAIN, I get corrupted files.
By chance I happened to look at the temp directory and saw that the file created by EPPlus is 3.87 KB and works perfectly, but the FileResult is returning a file that's 6.42 KB, which is corrupted. Why is this happening? I read somewhere that it was the server GZip compression causing it, so I turned it off, and it had no effect.
Someone, please help me, I'm going out of my mind... Here's my code.
[HttpGet]
public FileResult Excel(CenturyLinkOrderExcelQueryModel query)
{
    var file = Manager.GetExcelFile(query); // FileInfo
    return File(file.FullName, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", query.FileName);
}
As far as I'm concerned there's an issue with FileResult and its accompanying methods. I ended up "resolving" the issue by writing to the Response object directly:
[HttpGet]
public void Excel(CenturyLinkOrderExcelQueryModel query)
{
    var file = Manager.GetExcelFile(query);

    Response.Clear();
    Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
    Response.AddHeader("Content-Disposition", "attachment; filename=" + query.FileName);
    Response.BinaryWrite(System.IO.File.ReadAllBytes(file.FullName));
    Response.Flush();
    Response.Close();
    Response.End();
}
Try using OpenRead on the FileInfo to get a file stream and see if that works:
[HttpGet]
public FileResult Excel(CenturyLinkOrderExcelQueryModel query)
{
    var file = Manager.GetExcelFile(query); // FileInfo
    var fileStream = file.OpenRead();
    return File(fileStream, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", query.FileName);
}
I have the code below, which works well for small files, but for large files it generates the zip as required and then doesn't download it. I get all sorts of errors, including a timeout (which I have managed to resolve). The other problem is that it runs synchronously. The largest file I have generated myself is a 330 MB zip with about 30 HD images in it, but this could even reach gigabytes, since the user can choose to download about 100 or even more HD images at once.
To resolve both issues, I thought downloading asynchronously might help in both cases. I want to alert the user that their download has started and notify them when it is ready.
I am thinking of sending the stream down if the client IsConnected (then deleting the file), or sending an email asking them to download the file if they have decided to log out (then deleting the file via the offline download link). I just don't know where or how to write the async code, or whether what I want to do can actually be done if the user decides to log out.
Here's my current code:
private void DownloadFile(string filePath)
{
    FileInfo myfile = new FileInfo(filePath);

    // Checking if file exists
    if (myfile.Exists)
    {
        // Clear the content of the response
        Response.ClearContent();

        // Add the file name and attachment, which will force the open/cancel/save dialog box to show, to the header
        Response.AddHeader("Content-Disposition", "attachment; filename=" + myfile.Name);

        // Add the file size into the response header
        Response.AddHeader("Content-Length", myfile.Length.ToString());

        // Set the ContentType
        Response.ContentType = "application/octet-stream";

        Response.TransmitFile(filePath);
        Response.Flush();

        try
        {
            myfile.Delete();
        }
        catch { }
    }
}
I don't know about async downloads from ASP.NET applications, so I can't address that part of the question. But I have run into enough download issues to always start from the same place.
First, download from a generic handler (ASHX) and not a web form. The web form wants to do extra processing at the end of the request that can cause problems. Your question didn't state whether you are using a web form or a generic handler.
Second, always end the request with the ApplicationInstance.CompleteRequest() method call. Don't use Response.Close() or Response.End().
Those two changes have often cleaned up download issues for me. Try these changes and see if you get the same results. Even if you do, this is a better way of coding downloads.
Finally, as an aside, only catch appropriate exceptions in the try-catch block.
Your code would be like this:
public class Handler1 : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // set from QueryString
        string filePath = "...";

        FileInfo myfile = new FileInfo(filePath);

        // Checking if file exists
        if (myfile.Exists)
        {
            // Clear the content of the response
            context.Response.ClearContent();

            // Add the file name and attachment, which will force the open/cancel/save dialog box to show, to the header
            context.Response.AddHeader("Content-Disposition", "attachment; filename=" + myfile.Name);

            // Add the file size into the response header
            context.Response.AddHeader("Content-Length", myfile.Length.ToString());

            // Set the ContentType
            context.Response.ContentType = "application/octet-stream";

            context.Response.TransmitFile(filePath);
            context.Response.Flush();
            HttpContext.Current.ApplicationInstance.CompleteRequest();

            try
            {
                myfile.Delete();
            }
            catch (IOException)
            { }
        }
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }
}
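For completeness, the page would then just point the browser at the handler, supplying whatever the handler reads from the query string; the handler file name and parameter below are only illustrative, not something from your code:

    // Hypothetical call from the page that triggers the download
    Response.Redirect("Handler1.ashx?fileName=WeeklySummary.pdf");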
I'm getting a file from a database in byte [] format and want user to see download dialog before Linq will take it from the database. It's in C# and ASP.NET.
Now it works like this: the user chooses a file and clicks on it. In code I get the id of the clicked file and fetch the file from the database with LINQ. Then I send it with Response.OutputStream.Write(content, 0, content.Length);. The user doesn't see any download dialog until the file has been fetched from the database.
What can I do if I want users to see the download dialog before the file is downloaded?
Code:
Getting file by id:
public static byte[] getFile(Guid id)
{
    var linqFile = from file in MyDB.Files
                   where file.IdPliku.Equals(id)
                   select new
                   {
                       Content = file.Content
                   };

    return linqFile.ToList().FirstOrDefault().Content.ToArray();
}
Saving file:
public void SaveFile(Guid fileID, string filename, string mimeTypes)
{
    try
    {
        byte[] content = FileService.getFile(fileID);

        Response.ClearContent();
        Response.ClearHeaders();
        Response.ContentType = mimeTypes;
        Response.AppendHeader("Accept-Ranges", "bytes");
        Response.AppendHeader("Content-Range", string.Format("0-{0}/{1}", content.Length, content.Length));
        Response.AppendHeader("Content-Length", content.Length.ToString());
        Response.AppendHeader("Content-Encoding", "utf-8");
        Response.AppendHeader("Content-Type", Response.ContentType);
        Response.AppendHeader("Content-Disposition", "attachment; filename= " + HttpUtility.UrlEncode(filename));
        Response.OutputStream.Write(content, 0, content.Length);
        //Response.BinaryWrite(content);
        Response.Flush();
    }
    finally
    {
        Response.Close();
    }
}
You are my hope.
Your issue is here:
byte[] content = FileService.getFile(fileID);
In this line you load the whole file into the web server's RAM; whatever happens later doesn't matter any more, because at that point you have already pulled the entire content of the file from the database onto the web server.
I'm having déjà vu, because I'm sure I left exactly the same comment on a very similar question a few weeks ago. I can't find it now; search for something like it here on SO.
The solution is to stream directly to the Response's output stream and avoid the byte[] allocation above. Your data layer of course has to support this, and if it doesn't, you could add a method for it; SQL Server FILESTREAM or something similar is what you want (see the sketch below).
Greetings!
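For illustration only, here is a minimal sketch of that streaming approach using plain ADO.NET; the connection string name, table and column names are made up and would have to match your own schema:

    // Sketch: streams a BLOB from SQL Server to the response in chunks,
    // so the whole file never sits in a byte[] on the web server.
    // Requires the System.Data.SqlClient, System.Configuration and System.Web namespaces.
    public void StreamFileToResponse(Guid id, HttpResponse response, string fileName, string mimeType)
    {
        response.ClearContent();
        response.ContentType = mimeType;
        response.AddHeader("Content-Disposition", "attachment; filename=" + HttpUtility.UrlEncode(fileName));

        using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["MyDB"].ConnectionString))
        using (var command = new SqlCommand("SELECT Content FROM Files WHERE IdPliku = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            connection.Open();

            // SequentialAccess lets us read the column as a stream instead of one big array.
            using (var reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                if (reader.Read())
                {
                    var buffer = new byte[64 * 1024];
                    long offset = 0;
                    long bytesRead;
                    while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                    {
                        response.OutputStream.Write(buffer, 0, (int)bytesRead);
                        offset += bytesRead;
                    }
                }
            }
        }

        HttpContext.Current.ApplicationInstance.CompleteRequest();
    }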
I'm working on a reporting script which runs a number of PDF reports on a button click. The reports are created on the web server, and then I'd like the user to be given the option to download the files. I have worked out the script for downloading one file from the server, but I'm not sure how to download multiple files (there will probably be about 50).
After I run one report I redirect the user to an HTTP handler script:
Response.Redirect("Download.ashx?ReportName=" + "WeeklySummary.pdf");
public class Download : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        StringBuilder sbSavePath = new StringBuilder();
        sbSavePath.Append(DateTime.Now.Day);
        sbSavePath.Append("-");
        sbSavePath.Append(DateTime.Now.Month);
        sbSavePath.Append("-");
        sbSavePath.Append(DateTime.Now.Year);

        HttpContext.Current.Response.ClearContent();
        HttpContext.Current.Response.ContentType = "application/pdf";
        HttpResponse objResponce = context.Response;

        String test = HttpContext.Current.Request.QueryString["ReportName"];
        HttpContext.Current.Response.AppendHeader("content-disposition", "attachment; filename=" + test);
        objResponce.WriteFile(context.Server.MapPath(@"Reports\" + sbSavePath + @"\" + test));

        HttpContext.Current.Response.Flush();
        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.End();
    }

    public bool IsReusable { get { return false; } }
}
Thanks in advance. Please let me know if you'd like to see any more of my script.
The two options I see right away are the obvious one, simply calling the HTTP handler repeatedly (once per file), and zipping the files on the server so you send a single zip file across the wire. For the latter you can use the built-in ZipArchive class in System.IO.Compression (GZipStream on its own only compresses a single stream, it doesn't build a multi-file zip); a sketch of this follows below.
Also, you'll want to add some code in your handler to clean up those temp files once they're downloaded.
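Here's a rough sketch of the zip-then-send option combined with that cleanup, assuming .NET 4.5+ with references to System.IO.Compression and System.IO.Compression.FileSystem; the handler name, temp location and the way report paths are resolved are all made up for illustration:

    // Sketch: bundle several report PDFs into one zip, send it, then clean up.
    // Uses System.IO, System.IO.Compression and System.Web namespaces.
    public class ZipDownload : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Hypothetical list of report paths; in practice these would come
            // from the request or from wherever the reports were generated.
            string[] reportPaths = GetReportPaths(context);

            string zipPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".zip");
            using (var zip = ZipFile.Open(zipPath, ZipArchiveMode.Create))
            {
                foreach (string reportPath in reportPaths)
                {
                    zip.CreateEntryFromFile(reportPath, Path.GetFileName(reportPath));
                }
            }

            context.Response.ClearContent();
            context.Response.ContentType = "application/zip";
            context.Response.AddHeader("Content-Disposition", "attachment; filename=Reports.zip");
            context.Response.TransmitFile(zipPath);
            context.Response.Flush();
            HttpContext.Current.ApplicationInstance.CompleteRequest();

            // Clean up the temp zip once it has been sent.
            try { File.Delete(zipPath); }
            catch (IOException) { }
        }

        private string[] GetReportPaths(HttpContext context)
        {
            // Placeholder: resolve the generated report files here.
            throw new NotImplementedException();
        }

        public bool IsReusable { get { return false; } }
    }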