I have a handler which works as it should to serve a download. This is the important code:
// Get size of file
FileInfo f = new FileInfo(Settings.ReleaseFileLocation + ActualFileName);
long FileSize = f.Length;
// Init (returns ID of tblDownloadLog record created with blank end date)
int DownloadRecordID = Constructor.VersionReleaseDownload.newReleaseDownload(ActualFileName);
context.Response.Clear();
context.Response.Buffer = false;
context.Response.ContentType = "application/octet-stream";
context.Response.AddHeader("Content-Disposition", "attachment; filename=" + OriginalFileName);
context.Response.AddHeader("Content-Length", FileSize.ToString());
context.Response.TransmitFile(Settings.ReleaseFileLocation + ActualFileName);
context.Response.Close();
// Complete download log, fills out the end date
Constructor.VersionReleaseDownload.completeReleaseDownload(DownloadRecordID);
The context.Response.Close(); ensures that completeReleaseDownload() only runs once the download is complete, which is very useful (see: only count a download once it's served).
The problem is that we're getting a lot of log entries from the same IP address at roughly regular intervals. After digging a bit deeper, it appears these are users using download-resumer software.
When I try to use a download resumer myself, it seems to fail. My questions are:
How do I detect that the request is a partial (range) request?
How can I serve the partial request?
How can I make it work with the above code so that it a) calls newReleaseDownload on the first partial GET and b) calls completeReleaseDownload on the last partial GET?
This is generally achieved with an ETag and HTTP range requests; check out: http://www.devx.com/dotnet/Article/22533/1954
If you capture the packets sent by the download resumer, you will probably find the Range header being specified:
Range: bytes=500-1000
This lets you check whether the request is a partial one and, if so, take action like:
bool isFirstRequest = RangeStart == 0;
bool isLastRequest = RangeEnd == file.TotalBytes - 1; // ranges use zero-based indices
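To make that concrete, here is a rough sketch of a handler serving a single bytes=start-end range, assuming only one range per request (which is what download resumers typically send). The Settings, file-name variables and logging calls are taken from the question; the parsing is deliberately simplified, and persisting DownloadRecordID between the separate partial requests (for example in the database, keyed by IP and file) is left out.
public void ProcessRequest(HttpContext context)
{
    string path = Settings.ReleaseFileLocation + ActualFileName;
    long fileSize = new FileInfo(path).Length;

    // Default to the whole file, then narrow it if a Range header is present.
    long start = 0, end = fileSize - 1;
    string rangeHeader = context.Request.Headers["Range"]; // e.g. "bytes=500-1000"
    bool isRangeRequest = !string.IsNullOrEmpty(rangeHeader) && rangeHeader.StartsWith("bytes=");
    if (isRangeRequest)
    {
        string[] parts = rangeHeader.Substring("bytes=".Length).Split('-');
        if (parts[0].Length > 0) start = long.Parse(parts[0]);
        if (parts.Length > 1 && parts[1].Length > 0) end = long.Parse(parts[1]);
    }

    bool isFirstRequest = start == 0;
    bool isLastRequest = end == fileSize - 1;

    // In a real handler the record ID must survive across requests;
    // this sketch only shows where the logging calls belong.
    int downloadRecordID = 0;
    if (isFirstRequest)
        downloadRecordID = Constructor.VersionReleaseDownload.newReleaseDownload(ActualFileName);

    context.Response.Clear();
    context.Response.Buffer = false;
    context.Response.ContentType = "application/octet-stream";
    context.Response.AddHeader("Content-Disposition", "attachment; filename=" + OriginalFileName);
    context.Response.AddHeader("Accept-Ranges", "bytes");

    long length = end - start + 1;
    if (isRangeRequest)
    {
        context.Response.StatusCode = 206; // Partial Content
        context.Response.AddHeader("Content-Range",
            string.Format("bytes {0}-{1}/{2}", start, end, fileSize));
    }
    context.Response.AddHeader("Content-Length", length.ToString());

    // TransmitFile has an offset/length overload, so only the requested slice is sent.
    context.Response.TransmitFile(path, start, length);

    if (isLastRequest)
        Constructor.VersionReleaseDownload.completeReleaseDownload(downloadRecordID);
}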
Related
I have download functionality in my ASP.NET project, and the download code looks like this:
public class Download : IHttpHandler
{
private void DownloadPsListingProduct(Guid which)
{
string path = GetFilePathFromGuid(which);
FileInfo file = new FileInfo(path);
HttpResponse response = HttpContext.Current.Response;
response.ClearContent();
response.Clear();
response.ContentType = "application/octet-stream";
response.AddHeader("Content-Disposition",
"attachment;filename=\"" + file.Name.NeutralizationCrlfSequences() + "\";");
response.TransmitFile(file.FullName);
response.Flush();
response.End();
}
public bool IsReusable
{
get
{
return false;
}
}
}
This code works like a charm when I download a single file at a time.
But when one file is in the process of downloading and I request another file, the second download waits for the first one to complete before it starts.
Note: I am sending a new request to download each file.
I want to avoid this single-file-at-a-time behavior; the user should be able to download files without waiting for the previous one to complete.
ASP.NET Web API 2 should be able to handle this with very little ceremony. There's an example here, but I'll re-iterate the important parts:
public class FilesController : ApiController
{
public IHttpActionResult GetFile(Guid fileId)
{
var filePath = GetFilePathFromGuid(fileId);
var fileName = Path.GetFileName(filePath);
var mimeType = MimeMapping.GetMimeMapping(fileName);
return new OkFileDownloadResult(filePath, mimeType, fileName, this);
}
}
Of course, hooking up routing etc. in ASP.NET Web API 2 is quite different from hooking up an IHttpHandler, but there is also a plethora of examples on the internet (including here on SO) on how to get started with that.
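For reference, a minimal hook-up sketch, assuming the action above is named GetFile and the project follows the standard Web API 2 template; the WebApiConfig class and Global.asax wiring are the usual template conventions, not something taken from the linked example:
// App_Start/WebApiConfig.cs
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Enables [Route]/[RoutePrefix] attributes if you prefer attribute routing.
        config.MapHttpAttributeRoutes();

        // Convention-based route: GET /api/files/{fileId} maps to FilesController.GetFile.
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{fileId}",
            defaults: new { fileId = RouteParameter.Optional });
    }
}

// Global.asax.cs
// protected void Application_Start()
// {
//     GlobalConfiguration.Configure(WebApiConfig.Register);
// }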
I have the code below, which works well for small files, but for large files it generates the zip as required and then fails to download it. I get all sorts of errors, including timeouts (which I have managed to resolve). The other problem is that it runs synchronously. The largest file I have generated myself is a 330 MB zip with about 30 HD images in it, but this could reach several GB, since the user can choose to download 100 or more HD images at once.
To resolve both issues, I thought downloading asynchronously might help. I want to alert the user that their download has started and notify them when it is ready.
I am thinking of sending the stream down if the client is still connected (then deleting the file), or sending an email asking them to download the file if they have logged out (then deleting the file once the offline download link has been used). I just don't know where or how to write the async code, or whether what I want to do is even possible if the user decides to log out.
Here's my current code:
private void DownloadFile(string filePath)
{
FileInfo myfile = new FileInfo(filePath);
// Checking if file exists
if (myfile.Exists)
{
// Clear the content of the response
Response.ClearContent();
// Add the file name and attachment, which will force the open/cancel/save dialog box to show, to the header
Response.AddHeader("Content-Disposition", "attachment; filename=" + myfile.Name);
// Add the file size into the response header
Response.AddHeader("Content-Length", myfile.Length.ToString());
// Set the ContentType
Response.ContentType = "application/octet-stream";
Response.TransmitFile(filePath);
Response.Flush();
try
{
myfile.Delete();
}
catch { }
}
}
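Roughly, what I have in mind for the "send the stream down while the client is connected" part is something like this (an untested, synchronous sketch; the 64 KB buffer size is arbitrary):
private void StreamFileWhileConnected(string filePath)
{
    FileInfo myfile = new FileInfo(filePath);
    if (!myfile.Exists)
        return;

    Response.ClearContent();
    Response.AddHeader("Content-Disposition", "attachment; filename=" + myfile.Name);
    Response.AddHeader("Content-Length", myfile.Length.ToString());
    Response.ContentType = "application/octet-stream";
    Response.Buffer = false; // stream instead of buffering the whole zip in memory

    byte[] buffer = new byte[64 * 1024]; // arbitrary 64 KB chunk size
    using (FileStream fs = File.OpenRead(filePath))
    {
        int read;
        while (Response.IsClientConnected && (read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            Response.OutputStream.Write(buffer, 0, read);
            Response.Flush();
        }
    }

    // Only delete once the whole file has been sent (i.e. the client stayed connected).
    if (Response.IsClientConnected)
    {
        try { myfile.Delete(); }
        catch (IOException) { }
    }
}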
I don't know about async downloads from ASP.NET applications, so I can't address that part of the question. But I have run into enough download issues to always start from the same place.
First, download from a generic handler (ASHX) and not a Web Form. The Web Form wants to do extra processing at the end of the request that can cause problems. Your question didn't state whether you are using a Web Form or a generic handler.
Second, always end the request with the ApplicationInstance.CompleteRequest() method call. Don't use Response.Close() or Response.End().
Those two changes have often cleaned up download issues for me. Try them and see if you get the same results. Even if you do, this is a better way of coding downloads.
Finally, as an aside, only catch appropriate exceptions in the try-catch block.
Your code would be like this:
public class Handler1 : IHttpHandler
{
public void ProcessRequest(HttpContext context)
{
// set from QueryString
string filePath = "...";
FileInfo myfile = new FileInfo(filePath);
// Checking if file exists
if (myfile.Exists)
{
// Clear the content of the response
context.Response.ClearContent();
// Add the file name and attachment, which will force the open/cancel/save dialog box to show, to the header
context.Response.AddHeader("Content-Disposition", "attachment; filename=" + myfile.Name);
// Add the file size into the response header
context.Response.AddHeader("Content-Length", myfile.Length.ToString());
// Set the ContentType
context.Response.ContentType = "application/octet-stream";
context.Response.TransmitFile(filePath);
context.Response.Flush();
HttpContext.Current.ApplicationInstance.CompleteRequest();
try
{
myfile.Delete();
}
catch (IOException)
{ }
}
}
public bool IsReusable
{
get
{
return false;
}
}
}
I have developed an application where I get data from a database, bind it to an Infragistics grid, and then download an Excel file using the grid's export utility.
There is a problem with this approach when the data set is large (say 20,000 records or more): it takes a long time to process and download, and it usually kills the page, leaving the user with a blank page.
Is there a better approach to handle this issue and make reasonable improvements to the Excel download process?
Code is like something below:
public void LoadExcelPostingData()
{
try
{
query = "Some complex query here with up to 10 columns";
dt.Clear();
dt = new DataTable();
db2.GetDataTable(query, CommandType.Text, ref dt);
grdJurdata.DataSource = dt;
grdJurdata.DataBind();
ExportToExcel();
}
catch (Exception ex)
{
lblresult.Text = "Grd Err : " + ex.Message;
}
}
private void ExportToExcel()
{
try
{
// Infragistics built in excel export utility
UltraWebGridExcelExporter2.Export(grdJurdata);
}
catch (Exception ex)
{ }
}
Regarding file downloads, Microsoft's MSDN provides a detailed explanation:
Get the response.
On the response, set the content type to "APPLICATION/OCTET-STREAM" (a generic binary type, signalling that there is no specific application to open the file).
Set the "Content-Disposition" header to attachment; filename="<the file name>".
Write the file content into the response.
Close the response.
Also keep in mind: never use an Ajax request to download a file, because a file transfer needs a complete postback request. Here is the sample code given on MSDN:
<%
try
{
    System.String filename = "myFile.txt";

    // set the http content type to "APPLICATION/OCTET-STREAM"
    Response.ContentType = "APPLICATION/OCTET-STREAM";

    // initialize the http content-disposition header to indicate
    // a file attachment with the default filename "myFile.txt"
    System.String disHeader = "Attachment; Filename=\"" + filename + "\"";
    Response.AppendHeader("Content-Disposition", disHeader);

    // transfer the file to the response object
    System.IO.FileInfo fileToDownload =
        new System.IO.FileInfo("C:\\downloadJSP\\DownloadConv\\myFile.txt");
    Response.Flush();
    Response.WriteFile(fileToDownload.FullName);
}
catch (System.Exception e) // file IO errors
{
    SupportClass.WriteStackTrace(e, Console.Error);
}
%>
I also suggest you read this good discussion.
Edit #1: Another solution for your case:
Create a new page to hold the UltraWebGridExcelExporter and, in your main page, add an iframe tag to hold that new page. Let the iframe post back. Also upgrade your Infragistics version to the latest.
First, take another look at how your code is written; it likely needs refactoring or other improvements.
If you want to increase the request timeout, you can set a longer timeout duration in web.config:
<system.web>
  <httpRuntime executionTimeout="4800" /> <!-- value in seconds; raise as needed -->
</system.web>
Greetings!
I'm working on a reporting script which runs a number of reports (PDF) on a button click. The reports are created on the web server, and then I'd like to give the user the option to download the files. I have worked out the script for downloading one file from the server, but I'm not sure how to download multiple files (there will probably be about 50).
After I run one report I redirect the user to a http handler script.
Response.Redirect("Download.ashx?ReportName=" + "WeeklySummary.pdf");
public class Download : IHttpHandler {
public void ProcessRequest(HttpContext context)
{
StringBuilder sbSavePath = new StringBuilder();
sbSavePath.Append(DateTime.Now.Day);
sbSavePath.Append("-");
sbSavePath.Append(DateTime.Now.Month);
sbSavePath.Append("-");
sbSavePath.Append(DateTime.Now.Year);
HttpContext.Current.Response.ClearContent();
HttpContext.Current.Response.ContentType = "application/pdf";
HttpResponse objResponce = context.Response;
String test = HttpContext.Current.Request.QueryString["ReportName"];
HttpContext.Current.Response.AppendHeader("content-disposition", "attachment; filename=" + test);
objResponce.WriteFile(context.Server.MapPath(@"Reports\" + sbSavePath + @"\" + test));
HttpContext.Current.Response.Flush();
HttpContext.Current.Response.Clear();
HttpContext.Current.Response.End();
}
public bool IsReusable { get { return false; } }
}
Thanks in advance, please let me know if you'd like to see any more of my script.
The two options I see right away are the obvious one, simply calling the HTTP handler repeatedly, and zipping the files on the server so you send a single zip file across the wire. Note that the built-in GZipStream class only compresses a single stream, so to bundle multiple files you would want a zip archive (System.IO.Compression.ZipArchive in .NET 4.5+, or a library such as DotNetZip or SharpZipLib).
Also, you'll want to add some code in your handler to clean up those temp files once they're downloaded.
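If you go the zip route, a rough sketch of such a handler might look like the following, assuming .NET 4.5+ with references to System.IO.Compression and System.IO.Compression.FileSystem; the ~/Reports folder and file names are placeholders:
using System.IO;
using System.IO.Compression;
using System.Web;

public class DownloadAllReports : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Placeholder: the PDFs generated earlier for this user.
        string[] reportPaths = Directory.GetFiles(
            context.Server.MapPath("~/Reports"), "*.pdf");

        context.Response.ClearContent();
        context.Response.ContentType = "application/zip";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=Reports.zip");

        // Build the archive in memory; for very large report sets, write it to a
        // temp file instead so the whole archive is not held in RAM.
        using (var ms = new MemoryStream())
        {
            using (var archive = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
            {
                foreach (string path in reportPaths)
                {
                    archive.CreateEntryFromFile(path, Path.GetFileName(path));
                }
            }
            ms.Position = 0;
            ms.CopyTo(context.Response.OutputStream);
        }

        context.Response.Flush();
        context.ApplicationInstance.CompleteRequest();

        // Clean-up of the temp PDFs could happen here, once the zip has been sent.
    }

    public bool IsReusable { get { return false; } }
}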
In C# ASP.NET, could someone show me how to write entries from an array/List to a CSV file on the server and then open the file? I think the second part would be something like Response.Redirect("http://myserver.com/file.csv"), but I'm not sure how to write the file on the server.
Also, if this page is accessed by many users, is it better to generate a new CSV file every time or to overwrite the same file? Would there be any read/write/lock issues if several users try to access the same CSV file?
Update:
This is probably a silly question and I have searched on Google, but I can't find a definitive answer: how do you write a CSV file to the web server and export it in C# ASP.NET? I know how to generate it, but I would like to save it to www.mysite.com/my.csv and then export it.
Rom, you're doing it wrong. You don't want to write files to disk just so IIS can serve them up; that has security implications and adds complexity. All you really need to do is save the CSV directly to the response stream.
Here's the scenario: the user wishes to download a CSV. The user submits a form with details about the CSV they want. You prepare the CSV, then provide the user a URL to an ASPX page which constructs the CSV file and writes it to the response stream. The user clicks the link. The ASPX page is blank; in the page codebehind you simply write the CSV to the response stream and end it.
You can add the following to the (I believe this is correct) Load event:
string attachment = "attachment; filename=MyCsvLol.csv";
HttpContext.Current.Response.Clear();
HttpContext.Current.Response.ClearHeaders();
HttpContext.Current.Response.ClearContent();
HttpContext.Current.Response.AddHeader("content-disposition", attachment);
HttpContext.Current.Response.ContentType = "text/csv";
HttpContext.Current.Response.AddHeader("Pragma", "public");
var sb = new StringBuilder();
foreach(var line in DataToExportToCSV)
sb.AppendLine(TransformDataLineIntoCsv(line));
HttpContext.Current.Response.Write(sb.ToString());
HttpContext.Current.Response.End();
writing to the response stream code ganked from here.
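DataToExportToCSV and TransformDataLineIntoCsv above are placeholders for your own data and formatting; a minimal escaping helper along those lines, assuming each line is a string array of field values, might look like:
// Quotes each field and escapes embedded quotes, so commas and quotes in the
// data don't break the CSV structure.
private static string TransformDataLineIntoCsv(string[] fields)
{
    var escaped = new string[fields.Length];
    for (int i = 0; i < fields.Length; i++)
    {
        string value = fields[i] ?? string.Empty;
        escaped[i] = "\"" + value.Replace("\"", "\"\"") + "\"";
    }
    return string.Join(",", escaped);
}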
Here's a very simple free open-source CsvExport class for C#. There's an ASP.NET MVC example at the bottom.
https://github.com/jitbit/CsvExport
It takes care of line-breaks, commas, escaping quotes, and MS Excel compatibility. Just add one short .cs file to your project and you're good to go.
(disclaimer: I'm one of the contributors)
Here is a CSV ActionResult I wrote that takes a DataTable and converts it into CSV. You can return this from your controller action and it will prompt the user to download the file. You should be able to convert this easily into a List-compatible form, or even just put your list into a DataTable.
using System;
using System.Text;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Data;
namespace Detectent.Analyze.ActionResults
{
public class CSVResult : ActionResult
{
/// <summary>
/// Converts the columns and rows from a data table into an Microsoft Excel compatible CSV file.
/// </summary>
/// <param name="dataTable"></param>
/// <param name="fileName">The full file name including the extension.</param>
public CSVResult(DataTable dataTable, string fileName)
{
Table = dataTable;
FileName = fileName;
}
public string FileName { get; protected set; }
public DataTable Table { get; protected set; }
public override void ExecuteResult(ControllerContext context)
{
StringBuilder csv = new StringBuilder(10 * Table.Rows.Count * Table.Columns.Count);
for (int c = 0; c < Table.Columns.Count; c++)
{
if (c > 0)
csv.Append(",");
DataColumn dc = Table.Columns[c];
string columnTitleCleaned = CleanCSVString(dc.ColumnName);
csv.Append(columnTitleCleaned);
}
csv.Append(Environment.NewLine);
foreach (DataRow dr in Table.Rows)
{
StringBuilder csvRow = new StringBuilder();
for(int c = 0; c < Table.Columns.Count; c++)
{
if(c != 0)
csvRow.Append(",");
object columnValue = dr[c];
if (columnValue == null)
csvRow.Append("");
else
{
string columnStringValue = columnValue.ToString();
string cleanedColumnValue = CleanCSVString(columnStringValue);
if (columnValue.GetType() == typeof(string) && !columnStringValue.Contains(","))
{
cleanedColumnValue = "=" + cleanedColumnValue; // Prevents a number stored in a string from being shown as 8888E+24 in Excel. Example use is the AccountNum field in CI that looks like a number but is really a string.
}
csvRow.Append(cleanedColumnValue);
}
}
csv.AppendLine(csvRow.ToString());
}
HttpResponseBase response = context.HttpContext.Response;
response.ContentType = "text/csv";
response.AppendHeader("Content-Disposition", "attachment;filename=" + this.FileName);
response.Write(csv.ToString());
}
protected string CleanCSVString(string input)
{
string output = "\"" + input.Replace("\"", "\"\"").Replace("\r\n", " ").Replace("\r", " ").Replace("\n", "") + "\"";
return output;
}
}
}
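A minimal usage sketch (the controller, action, and column values here are illustrative, not part of the original class):
using System.Data;
using System.Web.Mvc;
using Detectent.Analyze.ActionResults;

public class ReportsController : Controller
{
    public ActionResult ExportUsers()
    {
        // Build or fetch the DataTable to export (placeholder data here).
        DataTable table = new DataTable();
        table.Columns.Add("Name");
        table.Columns.Add("Email");
        table.Rows.Add("Alice", "alice@example.com");

        // The browser receives it as an attachment named users.csv.
        return new CSVResult(table, "users.csv");
    }
}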
A comment about Will's answer: you might want to replace HttpContext.Current.Response.End(); with HttpContext.Current.ApplicationInstance.CompleteRequest();. The reason is that Response.End() throws a System.Threading.ThreadAbortException; it aborts the thread. If you have an exception logger, it will be littered with ThreadAbortExceptions, which in this case is expected behavior.
Intuitively, sending a CSV file to the browser should not raise an exception.
See here for more: Is Response.End() considered harmful?
How to write to a file is an easy search in Google ... first search result.
As far as creating the file each time a user accesses the page: each access will act on its own behalf, and your business case will dictate the behavior.
Case 1 - same file, and it does not change (this type of case can be defined in multiple ways)
You would have logic that creates the file when needed and only serves the existing file when generation is not needed.
Case 2 - each user needs to generate their own file
You would decide how to identify each user, create a file for each user, and serve the file they are supposed to see ... this can easily merge with Case 1. Then you delete the file after serving the content, or keep it if it requires persistence.
Case 3 - same file, but generation is required for each access
Use Case 2; this will cause a generation on each access, but with clean-up once the file has been served. (A sketch of the Case 2 approach follows.)
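For Case 2, a rough sketch of writing a per-user CSV under the site root and then redirecting to it; the ~/csv folder and naming scheme are placeholders, and note that the stream-to-response approach in the earlier answers avoids this disk step entirely:
using System;
using System.Collections.Generic;
using System.IO;
using System.Web;

public static class CsvFileWriter
{
    // Writes the lines to a per-user file under ~/csv and returns its virtual path.
    public static string WriteUserCsv(HttpContext context, IEnumerable<string> csvLines)
    {
        string folder = context.Server.MapPath("~/csv");
        Directory.CreateDirectory(folder); // no-op if it already exists

        // A per-request name avoids two users overwriting each other's file.
        string fileName = string.Format("export-{0}.csv", Guid.NewGuid());
        File.WriteAllLines(Path.Combine(folder, fileName), csvLines);

        return "~/csv/" + fileName;
    }
}

// Usage from a page or handler:
//   string url = CsvFileWriter.WriteUserCsv(HttpContext.Current, myCsvLines);
//   Response.Redirect(url);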
Check out the CsvReader/Writer library at http://www.codeproject.com/KB/cs/CsvReaderAndWriter.aspx