I'm having trouble closing a ModalPopup after a code sequence has executed.
I am coding a file browser for my company and everything works fine except for downloading files.
I use SignalR to invoke a virus scan before downloading the file.
My code looks like this:
if (e.CommandName == "DownloadFile")
{
    string filename = ((Button)e.Item.FindControl("bt_file")).Text;
    if (transHub.doScanFile(filename, currentPathShort, shareType, MasterSessionID, SessionID, user))
    {
        Downloader.DownloadFile(HttpContext.Current, currentPath + @"\" + filename);
        mpe_download.Hide();
    }
    else
    {
        lb_download_status.Text = "Virus found!";
        mpe_download.Show();
    }
}
The download itself works fine, but the ModalPopup I am using to show the virus scan progress does not close when the download starts. I open the popup from client-side JavaScript:
$(".download").on("click", function () {
$find("mpe_download_bhvr").show();
$("#download-progress").progressbar({ value: false });
})
Can you help me close the popup just as the download starts? Or am I doing it completely wrong?
I found no real solution for this, but I did come up with a small workaround.
I now invoke the download from JavaScript: the files are zipped and moved from my user share (the server holding the files) to my web server, and a direct link is generated. This is reasonably secure, and it also makes it possible to download multiple files simultaneously. So I am no longer using the download handler, and I can use a jQuery dialog popup to show information instead.
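For anyone trying the same workaround, here is a minimal sketch of the server-side part, assuming .NET 4.5+ for System.IO.Compression; the "~/Downloads" folder and the method name are placeholders for illustration, not my actual code:

using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Web;

// Zips the selected files into a web-accessible folder and returns a direct link
// that client-side script can navigate to. Folder and method names are examples only.
public string CreateDownloadLink(IEnumerable<string> sourceFiles)
{
    string zipName = Guid.NewGuid().ToString("N") + ".zip";
    string zipPath = HttpContext.Current.Server.MapPath("~/Downloads/" + zipName);

    using (ZipArchive archive = ZipFile.Open(zipPath, ZipArchiveMode.Create))
    {
        foreach (string file in sourceFiles)
        {
            archive.CreateEntryFromFile(file, Path.GetFileName(file));
        }
    }

    // The client can set window.location (or an anchor's href) to this URL.
    return VirtualPathUtility.ToAbsolute("~/Downloads/" + zipName);
}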
Related
I am developing an application that allows the user to download an Excel file with regular content (no bigger than a few MB).
On IE9 the file gets downloaded perfectly, but on IE8 some of the pages that allow the download do not work.
A new page opens and is closed right away without showing the download bar.
The Cache-Control header is set to private.
I have disabled all of my IE8 add-ons.
I have compared the server responses for the page that does allow saving the file and the one that does not work, and they match exactly (apart from the path).
I don't know why in some cases the file gets downloaded perfectly and in others it doesn't.
Here is the server-side code that I use to download the file:
protected void GetExportedFile()
{
    string filename = Form("filename");
    if (string.IsNullOrEmpty(filename))
    {
        Logger.Instance.Write("GetExportedFile is missing the parameter filename");
        Response.Redirect("ErrorPage.aspx");
    }
    string filePath = Context.Server.MapPath("****/****/" + filename);
    Response.ClearHeaders();
    Response.ClearContent();
    SetContentType(ContentType.Excel);
    Response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}", filename));
    Response.WriteFile(filePath);
    Response.Flush();
    try
    {
        File.Delete(filePath);
    }
    catch (Exception ex)
    {
        Logger.Instance.Write(
            "GetExportedFile failed to delete the file '" + filePath +
            "', Error: " + ex.ToString(), "Error");
    }
    try
    {
        Response.End();
    }
    catch (ThreadAbortException ex)
    {
        //Don't add anything here.
        //because if you write here in Response.Write,
        //that text also will be added to your text file.
    }
}
I should mention, although I don't think it is relevant, that prior to the downloads that don't work on IE8 I am making some AJAX calls to be notified when the Excel generation has finished, while on the page that does work I don't follow this procedure.
I would also like to add that my application resides behind an application firewall (F5); when it is deactivated, all of the downloads work on IE8. The issue is that I am not seeing any changes in the response.
Thanks
If anyone sees this post: I have found the reason for the problem.
IE8 has a security policy that will not allow a file download to be invoked directly from a script request.
Since I invoked a series of AJAX calls to the server querying the file creation state, and issued a download call when the file was ready, IE canceled it.
To work around the IE8 policy, when file creation finished I popped up a window on the client with a link to the file; when that link was clicked, the file downloaded successfully.
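In case it is useful, here is roughly what that looks like from WebForms code-behind; the control names and export folder are placeholders for illustration, not my actual code:

// Called once the AJAX polling reports the export is finished. Instead of starting
// the download from script (which IE8 blocks), show the user a link to click.
protected void OnExportFinished(string filename)
{
    string fileUrl = ResolveUrl("~/Exports/" + filename);  // assumed export folder
    lnkDownload.NavigateUrl = fileUrl;                     // asp:HyperLink inside a popup panel
    pnlDownloadReady.Visible = true;                       // the user clicks the link themselves
}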
I hope it helps someone one day...
I have been working on my first C# application lately and I finished working on the interface - it's perfect.
Now I am stuck at the part where I get an Open File dialog and then send that file to my PHP script located on localhost (wamp).
This is what I've been trying to do:
private void menu_upload_file_Click(object sender, EventArgs e)
{
    DialogResult dialogOpened = openFileDialog1.ShowDialog();
    if (dialogOpened == DialogResult.OK)
    {
        string filename = openFileDialog1.FileName;
        using (var client = new WebClient())
        {
            client.UploadFile("http://dugi/imgitv3/upload.php?submit=true&action=upload", "POST", filename);
        }
    }
}
Nothing is happening and the file is not being sent (copied over) to my desired location.
As you can see, I am succeeding in opening the Open File dialog, but I have no clue whether I am actually capturing the filename/directory and sending it over to my website so it can deal with it.
Maybe the URL is the complication? I need to send those submit and action parameters or the script won't load.
OH AND: In PHP, the $_FILES array MUST be named images ($_FILES['images']); maybe this is the problem? (Thought of this while asking here.)
So what is the problem? Why is the file not being sent to my website?
Assuming the call client.UploadFile("http://dugi/imgitv3/upload.php?submit=true&action=upload", "POST", filename); is not throwing an exception, try looking at the response:
var response = client.UploadFile("http://dugi/imgitv3/upload.php?submit=true&action=upload", "POST", filename);
Console.WriteLine("\nResponse Received.The contents of the file uploaded are:\n{0}",
System.Text.Encoding.ASCII.GetString(response));
The problem was in my PHP script. I was always looking for $_FILES['images'] there and nothing else. Then, after some googling, I noticed that C# (WebClient.UploadFile) sends the file array named file ($_FILES['file']).
I made the necessary changes in my PHP script and now everything is working.
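If you cannot change the PHP side and really need the field to be called images, a rough alternative is to build the multipart request yourself, for example with HttpClient (.NET 4.5+); this is only a sketch reusing the URL from the question, not something tested against that script:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

// Uploads a file so that PHP sees it as $_FILES['images'] instead of $_FILES['file'].
static async Task UploadAsImagesFieldAsync(string url, string path)
{
    using (var client = new HttpClient())
    using (var form = new MultipartFormDataContent())
    using (var fileStream = File.OpenRead(path))
    {
        // The second argument is the form field name that PHP will see in $_FILES.
        form.Add(new StreamContent(fileStream), "images", Path.GetFileName(path));

        HttpResponseMessage response = await client.PostAsync(url, form);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}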
In one of our projects we need the functionality to download a file from the server to a client location.
For this we are using an ASHX handler to do the operation. It's working perfectly and we are able to download files.
Now we have a requirement to update a field when a download is started and when it is completed. Is there any way to do this?
Once we click the download link, the Save As dialog box appears, and after that I think we have no control to check the progress. I don't think we even know which button was clicked, i.e. whether the user clicked 'Yes' or 'No'.
Can anyone please suggest a method to know when the download has started and when it has completed? We are using ASP.NET 2.0 with C#.
The handler used for the download is given below:
string fileUrl = string.Empty;
if (context.Request["fileUrl"] != null)
{
    fileUrl = context.Request["fileUrl"].ToString();
}
string filename = System.IO.Path.GetFileName(fileUrl);
context.Response.ClearContent();
context.Response.ContentType = "application/exe";
context.Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", filename));
context.Response.TransmitFile(fileUrl);
context.Response.Flush();
The file download is triggered from an ASPX page method like this:
private void DownloadExe()
{
    string downloadUrl = "Test.exe";
    Response.Redirect("Test.ashx?fileUrl=" + downloadUrl, false);
}
Your ASHX handler knows when the download has started (since the handler actually gets called) and when it has completed (the end of the handler is reached). You may even get some progress server-side if you write the response manually in chunks; this way you may also be able to detect some cases where the user cancels the download (if writing to the response stream fails at some point).
Depending on your needs, you may be able to pass this information to other pages (e.g. via session state) or simply store it in a database.
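A minimal sketch of that chunked approach, assuming a handler like the one above; LogDownloadStatus and the file location are placeholders:

public void ProcessRequest(HttpContext context)
{
    string path = context.Server.MapPath("~/Files/Test.exe");  // assumed file location
    context.Response.ContentType = "application/octet-stream";
    context.Response.AddHeader("content-disposition", "attachment; filename=Test.exe");

    LogDownloadStatus("started");
    try
    {
        using (FileStream fs = File.OpenRead(path))
        {
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Stop if the client has gone away; not every cancel is detectable.
                if (!context.Response.IsClientConnected)
                    throw new HttpException("Client disconnected");
                context.Response.OutputStream.Write(buffer, 0, read);
                context.Response.Flush();
            }
        }
        LogDownloadStatus("completed");
    }
    catch (HttpException)
    {
        LogDownloadStatus("cancelled");
    }
}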
How about this:
Response.BufferOutput = false;
Response.TransmitFile(fileUrl);
//download complete code
If you disable response output buffering, then it won't move past the line of code that sends the file to the client until the client has finished receiving it. If they cancel the download halfway through, it throws an HttpException, so the download-complete code doesn't get run.
You could also place your download-complete code after your call to flush the buffer, but it's better not to enable buffering when sending large binary files, to save server memory.
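Put together, it would look something like this sketch; MarkDownloadCompleted and MarkDownloadCancelled are placeholder methods for whatever field update you need:

Response.BufferOutput = false;      // stream directly to the client
try
{
    Response.TransmitFile(fileUrl); // returns once the client has received the file
    MarkDownloadCompleted();        // e.g. update the "completed" field
}
catch (HttpException)
{
    MarkDownloadCancelled();        // client cancelled partway through
}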
OK, I had the same problem and stumbled upon a site suggesting this: check via cookies.
This works great for me.
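As I understand the idea (the token name below is only an example): the page sends a token with the download request, the handler echoes it back as a cookie just before streaming the file, and the page polls for that cookie to know the response has arrived:

// Inside the ASHX handler, before setting the headers and calling TransmitFile:
string token = context.Request["downloadToken"];  // sent by the page with the request
if (!string.IsNullOrEmpty(token))
{
    context.Response.AppendCookie(new HttpCookie("fileDownloadToken", token));
}
// On the page, a small script polls document.cookie until it sees "fileDownloadToken"
// with the token it sent, meaning the file response has reached the browser.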
I put together a download script after some wonderful help from Stack Overflow the other day. However, I have now found that after the file has been downloaded I need to reload the page to get rid of the progress template on the ASPX page. The code to remove the template worked before I added the download code.
Code to remove the progress template: upFinanceMasterScreen.Update();
I've tried putting this both before and after the redirect to the IHttpHandler:
Response.Redirect("Download.ashx?ReportName=" + "RequestingTPNLeagueTable.pdf");
public class Download : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        StringBuilder sbSavePath = new StringBuilder();
        sbSavePath.Append(DateTime.Now.Day);
        sbSavePath.Append("-");
        sbSavePath.Append(DateTime.Now.Month);
        sbSavePath.Append("-");
        sbSavePath.Append(DateTime.Now.Year);
        HttpContext.Current.Response.ClearContent();
        HttpContext.Current.Response.ContentType = "application/pdf";
        HttpResponse objResponce = context.Response;
        String test = HttpContext.Current.Request.QueryString["ReportName"];
        HttpContext.Current.Response.AppendHeader("content-disposition", "attachment; filename=" + test);
        objResponce.WriteFile(context.Server.MapPath(@"Reports\" + sbSavePath + @"\" + test));
    }

    public bool IsReusable { get { return true; } }
}
Thanks for any help you can provide!
When you send back a file for the user to download, that is the HTTP response. In other words, you can either have a postback that refreshes the browser page, or you can send a file for the user to download. You cannot do both without special tricks.
This is why on most sites, when you download a file, you are first taken to a new page that says "Your download is about to begin", which then "redirects" you to the actual file using a meta-refresh or JavaScript.
For example, when you go here to download the .NET 4 runtime:
http://www.microsoft.com/downloads/en/confirmation.aspx?FamilyID=0a391abd-25c1-4fc0-919f-b21f31ab88b7&displaylang=en&pf=true
It renders the page, then uses the following meta-refresh tag to actually give the user the file to download:
<META HTTP-EQUIV="refresh" content=".1; URL=http://download.microsoft.com/download/9/5/A/95A9616B-7A37-4AF6-BC36-D6EA96C8DAAE/dotNetFx40_Full_x86_x64.exe" />
You'll probably have to do something similar in your app. However, if you are truly interested in doing something after the file is completely downloaded, you're out of luck, as there's no event to communicate that to the browser. The only way to do that is an AJAX upload like gmail uses when you upload an attachment.
In my case, I was using MVC and I just wanted the page to refresh a few seconds after the download button was selected in order to show the new download count. I was returning the file from the controller.
To do this I simply changed the view by adding an onclick event to the download button that called the following script (also in the view):
setTimeout(function () {
window.location.reload(1);
}, 5000);
It fit my purpose... hope it helps someone else.
This is a quick and easy hack if needed.
Step 1: Add hidden button to .aspx page:
<asp:Button ID="btnExportUploaded" runat="server" Text="Button" style="visibility:hidden" OnClick="btnExportUploaded_Click" CssClass="btnExportUploaded" />
Step 2: Perform your default postback action and, at the end, register a startup script with a jQuery call that triggers the hidden button's click and causes the file to download:
ClientScriptManager cs = Page.ClientScript;
cs.RegisterStartupScript(this.GetType(), "modalstuff", "$('.btnExportUploaded').click();", true);
A simpler approach is to just do whatever is needed in the postback event and register a reload script with an additional argument to indicate the download.
Something like:
C# code:
protected void SaveDownloadCount(int downloadId)
{
    // Run in a PostBack event.
    // 1) Register download count, refresh page, etc.
    // 2) Register a script to reload the page with an additional parameter to indicate the download.
    Page.ClientScript.RegisterStartupScript(GetType(), "download",
        "$(document).ready(function(){window.location.href = window.location.pathname + (window.location.search ? window.location.search + '&' : '?') + 'printId={0}';});".Replace("{0}", downloadId.ToString()), true);
}
Then, in Page_Load we need to check for the download parameter and serve the file:
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        int printId;
        if (Request.QueryString["printId"] != null && int.TryParse(Request.QueryString["printId"], out printId))
        {
            // Check if the argument is valid and serve the file.
        }
        else
        {
            // Regular initialization
        }
    }
}
This is similar to @puddleglum's answer but without the drawback of the "out of sync" timeout.
I want to print HTML from a C# web service. The web browser control is overkill, and does not function well in a service environment, nor does it function well on a system with very tight security constraints. Is there any sort of free .NET library that will support the printing of a basic HTML page? Here is the code I have so far, which does not run properly.
public void PrintThing(string document)
{
    if (Thread.CurrentThread.GetApartmentState() != ApartmentState.STA)
    {
        Thread thread =
            new Thread((ThreadStart)delegate { PrintDocument(document); });
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
    }
    else
    {
        PrintDocument(document);
    }
}

protected void PrintDocument(string document)
{
    WebBrowser browser = new WebBrowser();
    browser.DocumentText = document;
    while (browser.ReadyState != WebBrowserReadyState.Complete)
    {
        Application.DoEvents();
    }
    browser.Print();
}
This works fine when called from UI-type threads, but nothing happens when called from a service-type thread. Changing Print() to ShowPrintPreviewDialog() yields the following IE script error:
Error: dialogArguments.___IE_PrintType is null or not an object.
URL: res://ieframe.dll/preview.dlg
And a small empty print preview dialog appears.
You can print from the command line using the following:
rundll32.exe %WINDIR%\System32\mshtml.dll,PrintHTML "%1"
Where %1 is the file path of the HTML file to be printed.
If you don't need to print from memory (or can afford to write the HTML to disk in a temp file), you can use:
using (Process printProcess = new Process())
{
    string systemPath = Environment.GetFolderPath(Environment.SpecialFolder.System);
    printProcess.StartInfo.FileName = systemPath + @"\rundll32.exe";
    printProcess.StartInfo.Arguments = systemPath + @"\mshtml.dll,PrintHTML """ + fileToPrint + @"""";
    printProcess.Start();
}
N.B. This only works on Windows 2000 and above I think.
I know that Visual Studio itself (at least in 2003 version) references the IE dll directly to render the "Design View".
It may be worth looking into that.
Otherwise, I can't think of anything beyond the Web Browser control.
Easy! Split your problem into two simpler parts:
render the HTML to PDF
print the PDF (SumatraPDF)
-print-to-default $file.pdf prints a PDF file on a default printer
-print-to $printer_name $file.pdf prints a PDF on a given printer
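A minimal sketch of the printing step, assuming SumatraPDF.exe is on the PATH (otherwise substitute its full install path):

using System.Diagnostics;

// Prints an already-rendered PDF on the default printer via SumatraPDF.
static void PrintPdf(string pdfPath)
{
    var startInfo = new ProcessStartInfo
    {
        FileName = "SumatraPDF.exe",                        // assumed to be on the PATH
        Arguments = "-print-to-default \"" + pdfPath + "\"",
        UseShellExecute = false,
        CreateNoWindow = true
    };

    using (Process process = Process.Start(startInfo))
    {
        process.WaitForExit();
    }
}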
If you've got it in the budget (~$3000), check out PrinceXML.
It will render HTML into a PDF, functions well in a service environment, and supports advanced features such as not breaking a page in the middle of a table cell (which a lot of browsers don't currently support).
A tool that works very well for me is HiQPdf. https://www.hiqpdf.com/
The price is reasonable (starts at $245) and it can render HTML to a PDF and also manage the printing of the PDF files directly.
Maybe this will help. http://www.codeproject.com/KB/printing/printhml.aspx
Also, I'm not sure which thread you are trying to access the browser control from, but it needs to be STA.
Note - The project referred to in the link does allow you to navigate to a page and perform a print without showing the print dialog.
I don't know the specific tools, but there are some utilities that record / replay clicks. In other words, you could automate the "click" on the print dialog. (I know this is a hack, but when all else fails...)