Automated file download using WebBrowser without URL - C#

I've been working on a web crawler written in C# using System.Windows.Forms.WebBrowser. I am trying to download a file off a website and save it on a local machine. More importantly, I would like this to be fully automated. The file download is started by clicking a button that calls a JavaScript function, which triggers the download and displays a "Do you want to open or save this file?" dialog. I definitely do not want to be manually clicking "Save as" and typing in the file name.
I am aware of HttpWebRequest and WebClient's download functions, but since the download is started by JavaScript, I do not know the URL of the file. For what it's worth, the JavaScript is a __doPostBack call that changes some values and submits a form.
I've tried getting focus on the Save As dialog from the WebBrowser to automate it from there, without much success. I know there's a way to force the download to save instead of asking whether to save or open by adding a header to the HTTP request, but I don't know how to specify the file path to download to.

I think you should prevent the download dialog from even showing. Here is one way to do that:
The JavaScript code causes your WebBrowser control to navigate to a specific URL, which is what makes the download dialog appear.
To prevent the WebBrowser control from actually navigating to this URL, attach an event handler to the Navigating event.
In your Navigating handler, analyze whether this is the navigation action you want to stop (is this the download URL? Perhaps check for a file extension; there must be a recognizable format). Use WebBrowserNavigatingEventArgs.Url to do so.
If this is the right URL, stop the navigation by setting the WebBrowserNavigatingEventArgs.Cancel property.
Continue the download yourself with the HttpWebRequest or WebClient classes, as sketched below.
Have a look at this page for more info on the event:
http://msdn.microsoft.com/en-us/library/system.windows.forms.webbrowser.navigating.aspx
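A minimal sketch of that approach (webBrowser1, the .pdf extension check, and the save path are placeholders; adjust them to whatever identifies your download URL):

// Cancel the navigation that would trigger the download dialog, then
// fetch the file ourselves. Note that a plain WebClient call does not
// carry the browser's session cookies; the cookie-forwarding approach
// further down in this thread handles that case.
webBrowser1.Navigating += (sender, e) =>
{
    if (e.Url.AbsolutePath.EndsWith(".pdf", StringComparison.OrdinalIgnoreCase))
    {
        e.Cancel = true; // stop the WebBrowser from showing the dialog
        using (var client = new System.Net.WebClient())
        {
            client.DownloadFile(e.Url, @"C:\temp\downloaded.pdf");
        }
    }
};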

A similar solution is available at
http://social.msdn.microsoft.com/Forums/en/csharpgeneral/thread/d338a2c8-96df-4cb0-b8be-c5fbdd7c9202/?prof=required
This works perfectly if there is a direct URL that includes the file name to download.
But sometimes a URL generates the file dynamically, so the URL has no file name; the website creates the file only after the URL is requested, and then the open/save dialog appears.
For example, some links generate a PDF file on the fly.
How do you handle that type of URL?

Take a look at Erika Chinchio's article at http://www.codeproject.com/Tips/659004/Download-of-file-with-open-save-dialog-box
I have successfully used it for downloading dynamically generated pdf urls.

Assuming the System.Windows.Forms.WebBrowser was used to access a protected page with a protected link that you want to download:
This code retrieves the actual link you want to download using the web browser. It will need to be changed for your specific page. The important part is the documentLinkUrl field, which is used below.
var documentLinkUrl = default(Uri);
browser.DocumentCompleted += (object sender, WebBrowserDocumentCompletedEventArgs e) =>
{
    var downloadLink = browser.Document.ActiveElement
        .GetElementsByTagName("a").OfType<HtmlElement>()
        .Where(atag =>
            atag.GetAttribute("href").Contains("DownloadAttachment.aspx"))
        .First();
    var documentLinkString = downloadLink.GetAttribute("href");
    documentLinkUrl = new Uri(documentLinkString);
};
browser.Navigate(yourProtectedPage);
Now that the protected page has been navigated to by the web browser and the download link has been acquired, this code downloads the link:
private static async Task DownloadLinkAsync(Uri documentLinkUrl)
{
    var cookieString = GetGlobalCookies(documentLinkUrl.AbsoluteUri);
    var cookieContainer = new CookieContainer();
    using (var handler = new HttpClientHandler() { CookieContainer = cookieContainer })
    using (var client = new HttpClient(handler) { BaseAddress = documentLinkUrl })
    {
        cookieContainer.SetCookies(documentLinkUrl, cookieString);
        var response = await client.GetAsync(documentLinkUrl);
        if (response.IsSuccessStatusCode)
        {
            var responseStream = await response.Content.ReadAsStreamAsync();
            // Response can be saved from the stream
        }
    }
}
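To actually save it, the stream from above can be copied out to disk (the target path is only an example):

// Persist the downloaded stream to disk; the path is an example only.
using (var file = System.IO.File.Create(@"C:\temp\attachment.pdf"))
{
    await responseStream.CopyToAsync(file);
}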
The code above relies on the GetGlobalCookies method from Erika Chinchio, which can be found in the excellent article provided by @Pedro Leonardo (linked above):
[System.Runtime.InteropServices.DllImport("wininet.dll", CharSet = System.Runtime.InteropServices.CharSet.Auto, SetLastError = true)]
static extern bool InternetGetCookieEx(string pchURL, string pchCookieName,
    System.Text.StringBuilder pchCookieData, ref uint pcchCookieData, int dwFlags, IntPtr lpReserved);

const int INTERNET_COOKIE_HTTPONLY = 0x00002000;

private string GetGlobalCookies(string uri)
{
    uint uiDataSize = 2048;
    var sbCookieData = new System.Text.StringBuilder((int)uiDataSize);
    if (InternetGetCookieEx(uri, null, sbCookieData, ref uiDataSize,
            INTERNET_COOKIE_HTTPONLY, IntPtr.Zero)
        && sbCookieData.Length > 0)
    {
        return sbCookieData.ToString().Replace(";", ",");
    }
    return null;
}

Related

URI Access to a CSV Report

I am new to C# and am very green! I have a C# application from which I would like to download a report from a Reserved.ReportViewerWebControl.axd and save it to a specific location. I found this code:
var theURL = "http://TEST/TEST/Pages/TEST.aspx?&FileName=TEST&rs:Command=GetResourceContents";
WebClient Client = new WebClient
{
    UseDefaultCredentials = true
};
byte[] myDataBuffer = Client.DownloadData(theURL);
var filename = "test.csv";
var fileStructureLocal = @"C:\Users\%UserName%\TEST\Downloads".Replace("%UserName%", UserName);
var fileStructureNetwork = "\\\\TEST\\TEST\\TEST\\TEST";
var fileLocation = fileStructureLocal + "\\" + filename;
if (System.IO.File.Exists(fileLocation))
{
    // DO NOTHING
}
else
{
    System.IO.File.WriteAllBytes(fileLocation, myDataBuffer);
    //File.WriteAllBytes("c:\\temp\\report.pdf", myDataBuffer);
    //SAVE FILE HERE
}
It works, but I get the source code and not the CSV file. I know the URL I get when I execute the report in a normal browser has a session ID and a control ID. If I copy that URL and put it in theURL, I get a 500 internal server error. I know I am all mixed up and not sure what I need to do, but I am trying many things. This was the closest I got... lol, sad I know. This is the URL I get when I execute it in the browser:
http://test/test/Reserved.ReportViewerWebControl.axd?%2fReportSession=brhxbx55ngxdhp3zvk5bjmv3&Culture=1033&CultureOverrides=True&UICulture=1033&UICultureOverrides=True&ReportStack=1&ControlID=fa0acf3c777540c5b389d67737b1f866&OpType=Export&FileName=test&ContentDisposition=OnlyHtmlInline&Format=CSV
How would I get this to download the file from a button click in my app and save it to my location?
Your target web page uses an SSRS ReportViewer control to manage the rendering of the reports. This control relies heavily on ASP.NET session state to render the report in the background via calls to the Reserved.ReportViewerWebControl.axd resource handler.
This means that to use this axd link that you have identified you must first trigger the content to be created and cached within the session context before it can be downloaded, and then you must download it from the same context.
We can't just run the page once and figure out the URL; we have to find a way to do this programmatically, using the same session between requests.
The ReportViewer control does this via JavaScript when the download button is clicked, which means there is no simple link to Reserved.ReportViewerWebControl.axd to scrape from the HTML.
This means we have to execute the same script manually or simulate the user clicking the link.
This solution will go into some screen-scraping techniques (UI automation) to simulate clicking the export button and capturing the result, but I would avoid this if you can.
You really should attempt to contact the developer directly for guidance; they may have implemented some simple URL parameters to export directly without having to automate the interface.
The concept is relatively simple:
Create a web browser session to the report page
Click on the export to CSV button
this will try to open another link in a new window which we need to suppress!
Capture the URL from the new window
Download the export file using the same session context
We can't use the web browser control alone for this, because its interface is UI driven.
We can't use HttpWebRequest or WebClient to execute the JavaScript against the HTML DOM; we have to use a web browser to achieve this.
The other issue that comes up is that we cannot simply use the WebBrowser NewWindow or FileDownload events on the control, as these events do not provide information such as the URL for the new window or the file download source and target.
Instead we must reference the internal COM browser (effectively IE) and use the native NewWindow3 event to capture the URL to Reserved.ReportViewerWebControl.axd so we can download it manually.
I use these main references to explain the technique
Get URL for WebBrowser.NewWindow Event
Automated file download using WebBrowser without URL
Finally, as I mentioned above, we cannot use the web browser to download the file directly from the URL, as it will pop up the Save As dialog in a new browser window or save directly to the configured Downloads folder.
As described in the reference article, we use the GetGlobalCookies method from Erika Chinchio, which can be found in the excellent article provided by @Pedro Leonardo (linked in the code below).
I've put all this into a simple console app that you can run; just change the URL to your report, the title of the export link, and the save path:
The following is how I obtained the link that I wanted to download; the exact link title and composition will vary depending on the implementation:
class Program
{
    [STAThread]
    static void Main(string[] args)
    {
        SaveReportToDisk("http://localhost:13933/reports/sqlversioninfo", "CSV (comma delimited)", "C:\\temp\\reportDump.csv");
    }

    /// <summary>
    /// Automate clicking on the 'Save As' drop down menu in a report viewer control embedded at the specified URL
    /// </summary>
    /// <param name="sourceURL">URL that the report viewer control is hosted on</param>
    /// <param name="linkTitle">Title of the export option that you want to automate</param>
    /// <param name="savepath">The local path to save the exported report to</param>
    static void SaveReportToDisk(string sourceURL, string linkTitle, string savepath)
    {
        WebBrowser wb = new WebBrowser();
        wb.ScrollBarsEnabled = false;
        wb.ScriptErrorsSuppressed = true;
        wb.Navigate(sourceURL);

        // wait for the page to load
        while (wb.ReadyState != WebBrowserReadyState.Complete) { Application.DoEvents(); }

        // We want to find the link that is the export-to-CSV menu item and click it.
        // This is the first link on the page that has a title='CSV'; modify this search if your link is different.
        // TODO: modify this selection mechanism to suit your needs, the following is very crude
        var exportLink = wb.Document.GetElementsByTagName("a")
            .OfType<HtmlElement>()
            .FirstOrDefault(x => (x.GetAttribute("title")?.Equals(linkTitle, StringComparison.OrdinalIgnoreCase)).GetValueOrDefault());

        if (exportLink == null)
            throw new NotSupportedException("Url did not resolve to a valid Report Viewer web Document");

        bool fileDownloaded = false;

        // listen for a new window, using the COM wrapper so we can capture the url
        (wb.ActiveXInstance as SHDocVw.WebBrowser).NewWindow3 +=
            (ref object ppDisp, ref bool Cancel, uint dwFlags, string bstrUrlContext, string bstrUrl) =>
            {
                Cancel = true; // block the default browser from opening the link in a new window
                Task.Run(async () =>
                {
                    await DownloadLinkAsync(bstrUrl, savepath);
                    fileDownloaded = true;
                }).Wait();
            };

        // execute the link
        exportLink.InvokeMember("click");

        // wait for the download to complete
        while (!fileDownloaded) { Application.DoEvents(); }
    }

    private static async Task DownloadLinkAsync(string documentLinkUrl, string savePath)
    {
        var documentLinkUri = new Uri(documentLinkUrl);
        var cookieString = GetGlobalCookies(documentLinkUri.AbsoluteUri);
        var cookieContainer = new CookieContainer();
        using (var handler = new HttpClientHandler() { CookieContainer = cookieContainer })
        using (var client = new HttpClient(handler) { BaseAddress = documentLinkUri })
        {
            cookieContainer.SetCookies(documentLinkUri, cookieString);
            var response = await client.GetAsync(documentLinkUrl);
            if (response.IsSuccessStatusCode)
            {
                var stream = await response.Content.ReadAsStreamAsync();
                // Response can be saved from the stream
                using (Stream output = File.OpenWrite(savePath))
                {
                    stream.CopyTo(output);
                }
            }
        }
    }

    // from Erika Chinchio, found in the excellent article provided by @Pedro Leonardo
    // (available here: http://www.codeproject.com/Tips/659004/Download-of-file-with-open-save-dialog-box)
    [System.Runtime.InteropServices.DllImport("wininet.dll", CharSet = System.Runtime.InteropServices.CharSet.Auto, SetLastError = true)]
    static extern bool InternetGetCookieEx(string pchURL, string pchCookieName,
        System.Text.StringBuilder pchCookieData, ref uint pcchCookieData, int dwFlags, IntPtr lpReserved);

    const int INTERNET_COOKIE_HTTPONLY = 0x00002000;

    private static string GetGlobalCookies(string uri)
    {
        uint uiDataSize = 2048;
        var sbCookieData = new System.Text.StringBuilder((int)uiDataSize);
        if (InternetGetCookieEx(uri, null, sbCookieData, ref uiDataSize,
                INTERNET_COOKIE_HTTPONLY, IntPtr.Zero)
            && sbCookieData.Length > 0)
        {
            return sbCookieData.ToString().Replace(";", ",");
        }
        return null;
    }
}
The reason I advise talking to the developer before going down the screen-scraping rabbit hole is that, as a standard, when I use the report viewer control I always try to implement the SSRS native rc: and rs: URL parameters, or at least make sure I provide a way to export reports directly via URL.
You cannot use these parameters out of the box; they are designed to be used when you are querying the SSRS server directly, which your example does not.
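For reference, a direct URL-access export against the report server looks something like this (the server name and report path here are placeholders):

http://myserver/ReportServer?/SalesReports/SqlVersionInfo&rs:Command=Render&rs:Format=CSV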
I didn't come up with this on my own; I have no idea which resource I learnt it from, but that means there is a chance others have come to a similar conclusion. I implement this mainly so I can use these concepts throughout the rest of the application. But also, where reports are concerned, one of the reasons we choose SSRS and RDLs as a reporting solution is their versatility: we write the report definition, and the controls allow users to consume it however they need to. If we have limited the ability for the user to export reports, we have really underutilized the framework.

How to download and upload file to SkyDrive via Live Connect SDK

I have an app in which I want to download and upload a simple .txt file with a URL inside. I have downloaded Live Connect SDK v5.4 and referenced the documentation, but it appears that the documentation is incorrect. The sample code uses event handlers for when a download/upload is complete, but those can no longer be used in v5.4.
I have two methods, downURL & upURL. I have started working on downURL:
private async void downURL()
{
    try
    {
        LiveDownloadOperationResult download = await client.DownloadAsync("URL.txt");
    }
    catch { }
}
I am not sure what I am supposed to use for the path; I put "URL.txt" for now. I've seen some examples with "/me/". Do I need this? The file does not need to be visible to the user, as the user can't really do anything with it, but it is vital for the app to work.
My question is: how do I use the LiveDownloadOperationResult download to save the file to isolated storage, get the text contents, and put them in a string? Also, if you know how to upload the file back up, the upload handler looks the same (but without the result variable).
This code helps you download the content of whichever file you want; it gets the content in OpenXML format.
Here, item.id is the ID of "URL.txt".
private async void downURL()
{
    try
    {
        LiveDownloadOperationResult operationResult = await client.DownloadAsync(item.id + "/Content?type=notebook");
        StreamReader reader = new StreamReader(operationResult.Stream);
        string Content = await reader.ReadToEndAsync();
    }
    catch { }
}
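To cover the rest of the question (saving to isolated storage and uploading the file back up), something along these lines should work inside the same async method. This is a sketch assuming a Windows Phone style project (System.IO.IsolatedStorage and Microsoft.Live namespaces); me/skydrive is a placeholder path, and UploadAsync is the Live SDK 5.4 upload method:

using (var store = IsolatedStorageFile.GetUserStoreForApplication())
{
    // Save the downloaded stream to isolated storage.
    using (var file = store.CreateFile("URL.txt"))
    {
        await operationResult.Stream.CopyToAsync(file);
    }

    // Read the text contents back into a string.
    string contents;
    using (var reader = new StreamReader(store.OpenFile("URL.txt", FileMode.Open)))
    {
        contents = reader.ReadToEnd();
    }

    // Upload the file back up.
    using (var stream = store.OpenFile("URL.txt", FileMode.Open))
    {
        await client.UploadAsync("me/skydrive", "URL.txt", stream, OverwriteOption.Overwrite);
    }
}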

How not to abort http response c#

I need to run several methods after sending a file to a user for download. What happens is that after I send the file, the response is aborted and I can no longer do anything after Response.End().
For example, this is my sample code:
Response.Clear();
Response.AddHeader("content-disposition", "attachment; filename=test.pdf");
Response.ContentType = "application/pdf";
byte[] a = System.Text.Encoding.UTF8.GetBytes("test");
Response.BinaryWrite(a);
Response.End();
StartNextMethod();
Response.Redirect(URL);
So, in this example StartNextMethod and Response.Redirect are not executing.
What I tried: I created a separate handler (.ashx) with the following code:
public void ProcessRequest(HttpContext context)
{
    context.Response.Clear();
    context.Response.AddHeader("content-disposition", "attachment; filename=test.pdf");
    context.Response.ContentType = "application/pdf";
    byte[] a = System.Text.Encoding.UTF8.GetBytes("test");
    context.Response.BinaryWrite(a);
    context.Response.End();
}
and call it like this:
Download d = new Download();
d.ProcessRequest(HttpContext.Current);
StartNextMethod();
Response.Redirect(URL);
but the same error happens. I've tried replacing Response.End with CompleteRequest, but it doesn't help.
I guess the problem is that I'm using HttpContext.Current but should use a separate response stream. Is that correct? How do I do that in a separate method, generically? (Assume that I want my handler to accept a byte array of data and a content type, and be downloadable from a separate response. I really do not want to use a separate page for the response.)
UPDATE
I still haven't found a good solution. I'd like to perform some actions after the user has downloaded a file, but without using a separate page for the response/request.
Update
Since you said no second page, do this instead. Add a section to your page that checks for a query string parameter (something like fileid, path, etc.). If that value is present, the page initiates the download process using your existing code. If it is not present, the page runs as normal.
Now when the user clicks the download link, you perform a postback (which you are already doing). In this postback, create an iframe on the page and set the URL of the iframe to your page's URL with the added query string parameter (mypage.aspx?id=12664 or ?download=true, something like that). After creating the iframe, perform whatever additional databinds etc. you wish to; a sketch follows the example link below.
Example
- http://encosia.com/ajax-file-downloads-and-iframes/
This above linked example uses an iFrame and an update panel, just like you are talking about.
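A rough sketch of the postback side of that idea (btnDownload_Click, RunYourOtherMethods, and mypage.aspx are illustrative names, not from the linked article):

protected void btnDownload_Click(object sender, EventArgs e)
{
    RunYourOtherMethods(); // placeholder for your databinds, logging, etc.

    // Inject a hidden iframe pointing back at this page with the download flag;
    // the browser fetches the file in the iframe while this response renders normally.
    string script =
        "var f = document.createElement('iframe');" +
        "f.style.display = 'none';" +
        "f.src = 'mypage.aspx?download=true';" +
        "document.body.appendChild(f);";
    ClientScript.RegisterStartupScript(GetType(), "hiddenDownloadFrame", script, true);
}

On the Page_Load side you check Request.QueryString["download"] and run your existing file-serving code.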
Original Post
Response.Flush will allow you to continue processing after you send the file to the user, or just don't call Response.End (you don't really need to).
However, Daniel A. White is correct: you can't actually redirect from your code after you send a file; you will get an error if you try. BUT you can continue to perform other server-side operations if you need to.
Other answers agree with the general consensus that you can't redirect after a file starts downloading: https://stackoverflow.com/a/822732/328968 (PHP, but the same concepts, since it involves HTTP in general), or Directing to a new page after downloading a file.
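In code, that idea looks something like this (a sketch based on the question's own snippet; CompleteRequest is optional but avoids the ThreadAbortException that Response.End throws):

Response.Clear();
Response.AddHeader("content-disposition", "attachment; filename=test.pdf");
Response.ContentType = "application/pdf";
Response.BinaryWrite(System.Text.Encoding.UTF8.GetBytes("test"));
Response.Flush();  // the file is on its way to the client
StartNextMethod(); // server-side work can continue (no redirect, though)
HttpContext.Current.ApplicationInstance.CompleteRequest(); // end cleanly without a ThreadAbortException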
Response.End() throws a thread abort exception. It is designed to end your response.
No code after that will process in that thread.
The End method causes the Web server to stop processing the script and return the current result. The remaining contents of the file are not processed.
What is it that you are trying to achieve?
If your purpose is to allow the PDF to download and then take the user to some other page, a little JavaScript can help you out.
Add a script with a timer that sets location.href to your redirect page.
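For example, something like this registered from the code-behind (the target page and the 3-second delay are placeholders):

// Redirect shortly after the download response has had a chance to start.
Page.ClientScript.RegisterStartupScript(GetType(), "redirectAfterDownload",
    "setTimeout(function () { location.href = 'NextPage.aspx'; }, 3000);", true);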
As the previous answers stated, returning the PDF file means sending HTTP headers. You cannot send additional headers after that, and Response.Redirect() simply means sending an HTTP 302.
If you don't want a separate page, or if you don't want to use AJAX, why not try:
<head>
<meta http-equiv="refresh" content="3; url=http://www.site.com/download.aspx?xxxx">
</head>
This will show the desired page to the user and will refresh after 3 seconds with the URL for downloading the PDF file.
Download the file in chunks, as illustrated in File Download in ASP.NET and Tracking the Status of Success/Failure of Download, or in the answer to this question. When the last chunk of the file has been written to the client, you can execute the code you need to. (It doesn't have to be at the end; it can be anywhere in between, depending upon your needs.)
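A sketch of the chunked pattern (the buffer size, content type, and the tracking hook are up to you):

private void WriteFileInChunks(HttpResponse response, string path)
{
    response.Clear();
    response.ContentType = "application/pdf";
    response.AddHeader("content-disposition",
        "attachment; filename=" + System.IO.Path.GetFileName(path));

    byte[] buffer = new byte[8192];
    using (var fs = System.IO.File.OpenRead(path))
    {
        int read;
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0 && response.IsClientConnected)
        {
            response.OutputStream.Write(buffer, 0, read);
            response.Flush();
        }
    }
    // The last chunk has been written (or the client disconnected):
    // record the success/failure of the download here.
}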
The user clicks a download button on WebForm1.aspx to start downloading a file. Then, after the file download is done (served by WebForm2.aspx), the user is automatically redirected.
WebForm1.aspx
<script type="text/javascript">
$(document).ready(function () {
$('#btnDL').click(function () {
$('body').append('<iframe src="WebForm2.aspx" style="display:none;"></iframe>');
return true;
});
});
</script>
<asp:Button runat="server" ID="btnDL" ClientIDMode="Static" Text="Download" OnClick="btnDL_Click" />
WebForm1.aspx.cs
protected void btnDL_Click(object sender, EventArgs e)
{
    var sent = Session["sent"];
    while (Session["sent"] == null)
    {
        // not sure if this is a bad idea or what but my cpu is NOT going nuts
    }
    StartNextMethod();
    Response.Redirect(URL);
}
WebForm2.aspx.cs
protected void Page_Load(object sender, EventArgs e)
{
    Response.Clear();
    Response.AddHeader("content-disposition", "attachment; filename=test.pdf");
    Response.ContentType = "application/pdf";
    byte[] a = System.Text.Encoding.UTF8.GetBytes("test");
    Response.BinaryWrite(a);
    Session["sent"] = true;
}
Global.asax.cs
protected void Session_Start(object sender, EventArgs e)
{
    Session["init"] = 0; // init and allocate session data storage
}
Note: make sure you don't use an ashx (generic handler) to serve your download; for some reason the session in ashx and aspx don't talk to each other unless you implement this.
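The usual way to get session state working in a generic handler, and presumably what the linked fix amounts to, is to have the handler implement IRequiresSessionState so it shares Session with the .aspx pages:

using System.Web;
using System.Web.SessionState;

// Implementing IRequiresSessionState opts the handler into the same
// session the .aspx pages use, so Session["sent"] becomes visible to both.
public class Download : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // ... write the file to the response as in WebForm2.aspx.cs ...
        context.Session["sent"] = true;
    }

    public bool IsReusable { get { return false; } }
}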
Just remove the context.Response.End(); because you are redirecting anyway...
The problem is flawed logic here.... Why would you end the response?
Get the PDF and display a link to it, or use a META refresh to redirect to the location of the PDF, or use a combination of both techniques.
I believe what you are trying won't work.
This is what I would do:
Write the content to a file locally and assign it a unique ID
Send the user to the next page, which contains a hidden frame that performs a request with the unique ID (JavaScript)
The hidden request page loads the file and pushes it onto the content stream.
This is the same behavior a lot of file download sites use. The only issue is if the hidden frame fails to perform the request (JavaScript turned off), which is why many of the same sites also show the link in case the automatic request fails.
Disadvantage: file cleanup.
I recommend this solution:
Don't use Response.End();
Declare a global var: bool isFileDownLoad;
Just after your Response.BinaryWrite(a); set isFileDownLoad = true;
Override your Render like:
/// <summary>
/// AEG: Very important to handle the thread-aborted exception
/// </summary>
protected override void Render(HtmlTextWriter w)
{
    if (!isFileDownLoad) base.Render(w);
}

download the html code rendered by asp.net web sites

I have to download and parse a website which is rendered by ASP.NET. If I use the code below I only get half of the page, without the rendered "content" that I need. I would like to get the full content that I can see with Firebug or the IE Developer Tools.
How can I do this? I didn't find a solution.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(URL);
HttpWebResponse response = (HttpWebResponse)req.GetResponse();
StreamReader streamReader = new StreamReader(response.GetResponseStream());
string code = streamReader.ReadToEnd();
Thank you!
UPDATE
I tried the WebBrowser control solution, but it didn't work. I have a WPF project and use the following code, and I don't even get the content of the website. I don't see my mistake right now :(.
System.Windows.Forms.WebBrowser webBrowser = new System.Windows.Forms.WebBrowser();
Uri uri = new Uri(myAdress);
webBrowser.AllowNavigation = true;
webBrowser.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(wb_DocumentCompleted);
webBrowser.Navigate(uri);
private void wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    System.Windows.Forms.WebBrowser wb = sender as System.Windows.Forms.WebBrowser;
    string tmp = wb.DocumentText;
}
UPDATE 2
That's the code I came up with in the meantime.
However, I don't get any output; my elementCollection doesn't return any values.
If I can get the html source as a string I'd be happy and parse it with the HtmlAgilityPack.
(I don't want to incorporate the browser into my XAML code.)
Sorry for getting on your nerves!
Thank you!
WebBrowser wb = new WebBrowser();
wb.Source = new Uri(MyURL);
HTMLDocument doc = (HTMLDocument)wb.Document;
IHTMLElementCollection elementCollection = doc.getElementsByName("body");
foreach (IHTMLElement element in elementCollection)
{
    tb.Text = element.toString();
}
If the page you're referring to has iframes or other dynamic loading mechanisms, HttpWebRequest alone wouldn't be enough. A better solution would be (if possible) to use a WebBrowser control.
The answer might be that the content of the web site is rendered with JavaScript, probably with some AJAX calls that fetch additional data from the server to build the content. Firebug and the IE Developer Tool will show you the rendered HTML code, but if you choose 'view source', you should see the same HTML as the one that you fetch with the code.
I would use a tool like the Fiddler Web Debugger to monitor what the page downloads when it is rendered. You might be able to get the needed content by simulating the AJAX requests that the page makes.
Note that it can be a b*tch to simulate browsing an ASP.NET web site if the navigation has been made with postbacks, because you will need to include the value of all the form elements (including the hidden view state) when simulating clicks on links.
Probably not an answer, but you might use the WebClient class to simplify your code:
WebClient client = new WebClient();
string html = client.DownloadString(URL);
Your code should be downloading the entire page. However, the page may, through JavaScript, add content after it's been loaded. Unless you actually run that JavaScript in a web browser, you won't see the entire DOM you see in Firebug.
You can try this:
protected override void Render(HtmlTextWriter writer)
{
    StringBuilder renderedOutput = new StringBuilder();
    StringWriter strWriter = new StringWriter(renderedOutput);
    HtmlTextWriter tWriter = new HtmlTextWriter(strWriter);
    base.Render(tWriter);
    string html = tWriter.InnerWriter.ToString();

    string filename = Server.MapPath(".") + "\\data.txt";
    FileStream outputStream = new FileStream(filename, FileMode.Create);
    StreamWriter sWriter = new StreamWriter(outputStream);
    sWriter.Write(renderedOutput.ToString());
    sWriter.Flush();

    // render for output
    writer.Write(renderedOutput.ToString());
}
I recommend using the following rendering engine instead of the WebBrowser control:
https://github.com/cefsharp/CefSharp
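A minimal off-screen sketch (assuming the CefSharp.OffScreen package; the URL is a placeholder):

using System;
using CefSharp;
using CefSharp.OffScreen;

class Program
{
    static void Main()
    {
        Cef.Initialize(new CefSettings());
        var browser = new ChromiumWebBrowser("http://example.com/page"); // placeholder URL

        browser.LoadingStateChanged += async (sender, e) =>
        {
            if (!e.IsLoading) // fires once the page, including script-rendered content, has loaded
            {
                string html = await browser.GetSourceAsync();
                Console.WriteLine(html.Length);
            }
        };

        Console.ReadKey(); // keep the process alive while the page loads
        Cef.Shutdown();
    }
}

The rendered HTML string can then be handed to HtmlAgilityPack for parsing, as the question intends.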

Update page after file download

I put together a download script after some wonderful help from Stack Overflow the other day. However, I have now found that after the file has been downloaded, I need to reload the page to get rid of the progress template on the aspx page. The code to remove the template worked before I added the download code.
Code to remove progress template: upFinanceMasterScreen.Update();
I've tried calling this before and after the redirect to the IHttpHandler:
Response.Redirect("Download.ashx?ReportName=" + "RequestingTPNLeagueTable.pdf");
public class Download : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        StringBuilder sbSavePath = new StringBuilder();
        sbSavePath.Append(DateTime.Now.Day);
        sbSavePath.Append("-");
        sbSavePath.Append(DateTime.Now.Month);
        sbSavePath.Append("-");
        sbSavePath.Append(DateTime.Now.Year);

        HttpContext.Current.Response.ClearContent();
        HttpContext.Current.Response.ContentType = "application/pdf";
        HttpResponse objResponse = context.Response;
        String test = HttpContext.Current.Request.QueryString["ReportName"];
        HttpContext.Current.Response.AppendHeader("content-disposition", "attachment; filename=" + test);
        objResponse.WriteFile(context.Server.MapPath(@"Reports\" + sbSavePath + @"\" + test));
    }

    public bool IsReusable { get { return true; } }
}
Thanks for any help you can provide!
When you send back a file for the user to download, that is the HTTP response. In other words, you can either have a postback which refreshes the browser page, or you can send a file for the user to download. You cannot do both without special tricks.
This is why on most sites, when you download a file, it first takes you to a new page that says "Your download is about to begin", and then subsequently "redirects" you to the file to download using meta-refresh or JavaScript.
For example, when you go here to download the .NET 4 runtime:
http://www.microsoft.com/downloads/en/confirmation.aspx?FamilyID=0a391abd-25c1-4fc0-919f-b21f31ab88b7&displaylang=en&pf=true
It renders the page, then uses the following meta-refresh tag to actually give the user the file to download:
<META HTTP-EQUIV="refresh" content=".1; URL=http://download.microsoft.com/download/9/5/A/95A9616B-7A37-4AF6-BC36-D6EA96C8DAAE/dotNetFx40_Full_x86_x64.exe" />
You'll probably have to do something similar in your app. However, if you are truly interested in doing something after the file is completely downloaded, you're out of luck, as there's no event to communicate that to the browser. The only way to do that is an AJAX upload like gmail uses when you upload an attachment.
In my case, I was using MVC and I just wanted the page to refresh a few seconds after the download button was selected in order to show the new download count. I was returning the file from the controller.
To do this I simply changed the view by adding an onclick event to the download button that called the following script (also in the view):
setTimeout(function () {
window.location.reload(1);
}, 5000);
It fit my purpose... hope it helps someone else.
This is quick and easy to hack if needed.
Step 1: Add a hidden button to the .aspx page:
<asp:Button ID="btnExportUploaded" runat="server" Text="Button" style="visibility:hidden" OnClick="btnExportUploaded_Click" CssClass="btnExportUploaded" />
Step 2: Perform your default postback action, and at the end register a startup script with a jQuery call which will trigger the hidden button's click and cause the file to download:
ClientScriptManager cs = Page.ClientScript;
cs.RegisterStartupScript(this.GetType(), "modalstuff", "$('.btnExportUploaded').click();", true);
A simpler approach is to just do whatever is needed in the PostBack event and register a reload script with an additional argument to indicate the download.
Something like:
C# code:
protected void SaveDownloadCount(int downloadId)
{
    // Run in a PostBack event.
    // 1) Register download count, refresh page, etc.
    // 2) Register a script to reload the page with an additional parameter to indicate the download.
    Page.ClientScript.RegisterStartupScript(GetType(), "download",
        "$(document).ready(function(){window.location.href = window.location.pathname + (window.location.search ? '&' : '?') + 'printId={0}';});".Replace("{0}", downloadId.ToString()), true);
}
Then, in Page_Load we need to check for the download parameter and serve the file:
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        int printId;
        if (Request.QueryString["printId"] != null && int.TryParse(Request.QueryString["printId"], out printId))
        {
            // Check if the argument is valid and serve the file.
        }
        else
        {
            // Regular initialization
        }
    }
}
This is similar to @puddleglum's answer, but without the drawback of the "out of sync" timeout.
