IE8: on some pages the file download does not work - C#

I am developing an application that allows the user to download an Excel file with regular content (no bigger than a few MB).
On IE9 the file downloads perfectly, but on IE8 some of the pages that offer the download do not work.
A new page opens and closes right away without showing the download bar.
The cache control header is set to private.
I have disabled all of my IE8 add-ons.
I have compared the server's response for a page where the file save works against one where it does not, and they match exactly (apart from the path).
I don't know why in some cases the file downloads perfectly and in others it doesn't.
Here is the server-side code that I use to download the file:
protected void GetExportedFile()
{
    string filename = Form("filename");
    if (string.IsNullOrEmpty(filename))
    {
        Logger.Instance.Write("GetExportedFile is missing the parameter filename");
        Response.Redirect("ErrorPage.aspx");
    }
    string filePath = Context.Server.MapPath("****/****/" + filename);
    Response.ClearHeaders();
    Response.ClearContent();
    SetContentType(ContentType.Excel);
    Response.AddHeader("Content-Disposition", string.Format("attachment; filename={0}", filename));
    Response.WriteFile(filePath);
    Response.Flush();
    try
    {
        File.Delete(filePath);
    }
    catch (Exception ex)
    {
        Logger.Instance.Write(
            "GetExportedFile failed to delete the file '" + filePath +
            "', Error: " + ex.ToString(), "Error");
    }
    try
    {
        Response.End();
    }
    catch (ThreadAbortException)
    {
        // Don't write anything to the response here:
        // anything written at this point would be appended
        // to the downloaded file.
    }
}
I should mention, although I don't think it is relevant, that prior to the downloads that fail on IE8 I make a series of AJAX calls to be notified when the Excel generation has finished, while the page that does work does not go through this procedure.
I would also like to add that my application sits behind an application firewall (F5); when it is deactivated all of the downloads work on IE8, yet I am not seeing any changes in the response.
Thanks

If anyone sees this post: I have found the reason for the problem.
IE8 has a security policy that does not allow a file download to be invoked directly from a script request.
Since I invoked a series of AJAX calls to the server querying the file creation state, and issued a download call when the file was ready, IE cancelled it.
To work around the IE8 policy, when the file creation finished I popped up a window on the client with a link to the file; when that link was clicked, the file downloaded successfully.
I hope it helps someone one day...
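For anyone wanting a concrete shape for this workaround, here is a minimal sketch assuming a WebForms code-behind; the GetExportedFile.aspx endpoint name and the OnExportReady method are illustrative placeholders, not the poster's actual code:
protected void OnExportReady(string filename)
{
    string url = "GetExportedFile.aspx?filename=" + HttpUtility.UrlEncode(filename);
    // Open a small window that contains only a link; the user's click counts
    // as a direct navigation, so IE8's restriction on script-initiated
    // downloads no longer applies.
    string script =
        "var w = window.open('', 'exportReady', 'width=320,height=120');" +
        "w.document.write('<a href=\"" + url + "\">Click here to download</a>');";
    ClientScript.RegisterStartupScript(GetType(), "exportReady", script, true);
}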

Related

Making Second HTTP Request (before first ends) closes connection to the first

I have a website that allows users to download files by clicking href links; each click issues an HTTP GET request that returns the file (IE prompts the user, Chrome downloads automatically).
public async Task<IActionResult> DownloadFile(DocumentFile file)
{
    string fileName = DocumentTask.RetrieveFiles(new string[1] { file.FileID }, file.Facility, UserData.Access).SingleOrDefault();
    if (System.IO.File.Exists(fileName))
    {
        FileInfo info = new FileInfo(fileName);
        return File(System.IO.File.ReadAllBytes(fileName), FileUtility.GetContentType(fileName), info.Name);
    }
    return NotFound(); // every code path must return a result (the original snippet omitted this)
}
When a user clicks a download link:
<a href="..."><i class="file pdf outline icon"></i>TestFile.pdf</a>
and then immediately after clicks another download link (before the first one has returned a response), the client appears to close the connection to the first request. It gives this error:
The remote host closed the connection. The error code is 0x800703E3.
This error gets thrown when the server attempts to return the file back to the user, but it can't because the connection has been closed.
Using the Chrome developer tools, I can see both requests getting queued; however, as soon as the second request is sent, it cancels the first request (closing the connection).
(Screenshot: Chrome developer tools showing the first request cancelled.)
Right now I prevent the user from clicking another download link until the previous request has returned; however, I would like to know if there is a more elegant solution that allows multiple requests to be sent and awaited.
(I have tested this in Chrome and IE 11 and both cancel the previous request sent)
Thanks in advance.
Maybe you could use a function of the form of the one below to reach your objective. I've tested it and it doesn't stop receiving files; let me know if this works for you:
public Stream DownloadFile(string filename)
{
    WebOperationContext.Current.OutgoingResponse.Headers["Content-Disposition"] = "attachment; filename=" + filename;
    WebOperationContext.Current.OutgoingResponse.ContentType = "application/octet-stream";
    return File.OpenRead(filename);
}
I was unable to find an answer to my problem; however, I was able to change the way my links work. Adding target="_blank" to the links makes each click open a new page that closes once the file has begun downloading. This still stops the user from selecting a second link until the other returns, but if they use the middle mouse button to open a link without changing tab focus, the error no longer occurs. I think multiple requests from the same tab were the issue.

Can anyone figure out a way to navigate a directory using this FTP Client?

So I downloaded an FTP client from Code Project, but I cannot figure out a way to navigate through the directories. The project link is at the bottom of the post.
Anyway, I am logging on to an FTP server with the following information:
Server IP: ftp.swfwmd.state.fl.us
Username: anonymous
Password: youremail@gmail.com
I need to navigate to the "pub" folder (it's the only folder where I have permission to write, delete, rename, make directories, etc.), but there is no way to click on a directory and traverse it.
For reference, the directories look as such:
README
lost+found
pub
public
...etc
I want to be able to click on the pub folder and see everything in pub. I'm not sure if this was originally implemented in the project, but I want to be able to do this.
To navigate into the pub folder, I was thinking of something along the lines of...
ftpRequest = (FtpWebRequest)WebRequest.Create("ftp://ftp.randomSite/text.txt");
However, I have zero experience with GUI code and don't know exactly how to get this working when a user clicks one of the directories listed above.
If someone could take a look at the project and figure out a way to traverse the directories using the GUI, I would be greatly indebted.
EDIT:
So I've tried the following:
FtpWebRequest reqFTP;
try
{
    ListBox b1 = (ListBox)sender;
    reqFTP = (FtpWebRequest)WebRequest.Create("ftp://" + ftpServerIP + "/" + b1.Text);
which gives me the correct URI, ftp://ftp.swfwmd.state.fl.us/pub (I verified this in the watch window), but I am guessing the code below is suspect.
It throws an exception immediately when it tries to get the response. I have no idea why, as everything looks fine.
    reqFTP.UseBinary = true;
    reqFTP.Credentials = new NetworkCredential(ftpUserID, ftpPassword);
    FtpWebResponse response = (FtpWebResponse)reqFTP.GetResponse();
    Stream ftpStream = response.GetResponseStream();
    ftpStream.Close();
    response.Close();
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message);
}
// Update the current directory listing
btnLstFiles_Click(sender, e);
which refreshes the current directory listing.
Link to Project
http://www.codeproject.com/Articles/17202/Simple-FTP-demo-application-using-C-Net
What I would do is handle a double-click event in lstFiles which would invoke a function that sends a command to the FTP server to change to the directory indicated by the clicked text.
I'm not familiar enough with the .Net FTP-related classes to know how to send the change directory command, so I would start by looking at the documentation for FtpWebRequest.
http://msdn.microsoft.com/en-us/library/System.Net.FtpWebRequest%28v=vs.110%29.aspx
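A minimal sketch of that approach, assuming the ftpServerIP, ftpUserID and ftpPassword fields from the linked demo (the helper itself is not part of the project): issue a ListDirectory request for the clicked folder and read the entries line by line. Note that FtpWebRequest defaults to DownloadFile when no Method is set, which fails on a directory URI and would explain the exception in the edit above.
using System;
using System.IO;
using System.Net;

// Hypothetical helper: list the contents of a subdirectory, e.g. from a
// double-click handler on lstFiles where 'directory' is the clicked text.
static void ListFtpDirectory(string ftpServerIP, string directory,
                             string ftpUserID, string ftpPassword)
{
    FtpWebRequest reqFTP = (FtpWebRequest)WebRequest.Create(
        "ftp://" + ftpServerIP + "/" + directory + "/");
    reqFTP.Method = WebRequestMethods.Ftp.ListDirectory; // default is DownloadFile
    reqFTP.Credentials = new NetworkCredential(ftpUserID, ftpPassword);

    using (FtpWebResponse response = (FtpWebResponse)reqFTP.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
            Console.WriteLine(line); // one directory entry per line
    }
}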

How to identify whether a download is completed using an ashx handler

In one of our projects we need the ability to download a file from the server to the client machine.
For this we are using an ashx handler, and it works perfectly; we are able to download files.
Now we have a new requirement: update a field when a download starts and when it completes. Is there any way to do this?
Once we click the download link, the Save As dialog appears, and after that I don't think we have any control to check the progress. We don't even know which button was clicked, i.e. whether the user clicked 'Yes' or 'No'.
Can anyone please suggest a method to know when the download has started and when it has completed? We are using ASP.NET 2.0 with C#.
The handler used for download is given below
string fileUrl = string.Empty;
if (context.Request["fileUrl"] != null)
{
    fileUrl = context.Request["fileUrl"].ToString();
}
string filename = System.IO.Path.GetFileName(fileUrl);
context.Response.ClearContent();
context.Response.ContentType = "application/exe";
context.Response.AddHeader("Content-Disposition", String.Format("attachment; filename={0}", filename));
context.Response.TransmitFile(fileUrl);
context.Response.Flush();
The file is downloaded from an aspx page method like this:
private void DownloadExe()
{
    string downloadUrl = "Test.exe";
    Response.Redirect("Test.ashx?fileUrl=" + downloadUrl, false);
}
Your ASHX handler knows when the download started (since the handler actually gets called) and when it completed (the end of the handler is reached). You may even get some server-side progress if you write the response manually in chunks; this way you may also detect some cases where the user cancels the download (writing to the response stream fails at some point).
Depending on your needs you may be able to transfer this information to other pages (e.g. via session state) or simply store it in a database.
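As a rough sketch of that idea (the handler class name, session key, and file path are assumptions, not from the question): record the status in session state around the transmit call. Note the handler must implement IRequiresSessionState to get write access to the session.
using System.Web;
using System.Web.SessionState;

public class TrackedDownloadHandler : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        context.Session["downloadStatus"] = "started";
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=Test.exe");
        context.Response.TransmitFile(context.Server.MapPath("~/Files/Test.exe"));
        context.Response.Flush();
        // Reaching this point means the handler finished writing the response;
        // as noted above, that is only an approximation of "the client got it".
        context.Session["downloadStatus"] = "completed";
    }

    public bool IsReusable { get { return false; } }
}
Other pages can then read Session["downloadStatus"] to update the field the requirement asks about.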
How about this:
Response.BufferOutput = false;
Response.TransmitFile(fileUrl);
//download complete code
If you disable response output buffering, execution won't move past the line of code that sends the file until the client has finished receiving it. If the user cancels the download half way through, an HttpException is thrown, so the download-complete code doesn't run.
You could also place your download-complete code after your call to flush the buffer, but it's better not to enable buffering when sending large binary files, to save server memory.
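Putting that together, a minimal sketch of the unbuffered pattern in an ashx handler might look like the following; the handler name, file path, and the MarkDownloadComplete/MarkDownloadCancelled bookkeeping calls are placeholders for whatever field update you need, not anything from the question:
using System.IO;
using System.Web;

public class DownloadWithTracking : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string filePath = context.Server.MapPath("~/Files/Test.exe");
        context.Response.BufferOutput = false; // stream straight to the client
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=" + Path.GetFileName(filePath));
        try
        {
            // With buffering off, this call does not return until the client
            // has received the file (or the connection drops).
            context.Response.TransmitFile(filePath);
            MarkDownloadComplete();  // placeholder: record the completed download
        }
        catch (HttpException)
        {
            MarkDownloadCancelled(); // placeholder: client cancelled mid-transfer
        }
    }

    private void MarkDownloadComplete() { /* update your field/database here */ }
    private void MarkDownloadCancelled() { /* update your field/database here */ }

    public bool IsReusable { get { return false; } }
}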
OK, I had the same problem and stumbled over a site suggesting this approach: check via cookies.
This works great for me.

Creating a hyperlink to a physical path on the local PC, to files not part of the web site

I have some files in a folder on the hard drive, like C:\ExtraContent\, that contains some PDF files. This folder is not part of the website. I was able to successfully upload a PDF to this folder using the default ASP.NET FileUpload control, no problem.
What I would like to do is create a hyperlink that links to a PDF in that folder, C:\ExtraContent\somePDF.pdf.
I can get close using a Button with the following code:
protected void Button1_Click(object sender, EventArgs e)
{
    WebClient client = new WebClient();
    byte[] buffer = client.DownloadData(@"C:\ExtraContent\somePDF.pdf");
    Response.ContentType = "application/pdf";
    Response.AddHeader("content-length", buffer.Length.ToString());
    Response.BinaryWrite(buffer);
}
The above works in terms of opening the file, but I can't get this to work with an ASP.NET HyperLink.
The reason I want to use a HyperLink is so that the user can choose to right-click and Save As to download a copy. If HyperLink controls can only link to relative paths, what can I do to get my desired result?
Note: making the files I'm trying to access part of the site is not practical for us.
Basically, allowing access to the folder the way you describe is a real security risk (it requires hacking at the permissions), isn't trivial, and in general should be avoided. The way to achieve your desired behaviour is something along these lines.
Firstly, create a blank aspx or ashx page.
Secondly, either in Page_Load or ProcessRequest, use code along the following lines:
string filePath = "c:\\Documents\\Stuff\\";
string fileName = "myPath.pdf";
byte[] bytes = System.IO.File.ReadAllBytes(filePath + fileName);
context.Response.Clear();
context.Response.ContentType = "application/pdf";
context.Response.Cache.SetCacheability(HttpCacheability.Private);
context.Response.Expires = -1;
context.Response.Buffer = true;
context.Response.AddHeader("Content-Disposition", string.Format("{0};FileName=\"{1}\"", "attachment", fileName));
context.Response.BinaryWrite(bytes);
context.Response.End();
I haven't tested this and wrote it from memory, so it might need some tweaks, but the above code should get you on the right track to make the person's browser begin downloading the file you provide.
EDIT: I just realized (after rereading your question) that your problem is slightly different to what I thought. To resolve your issue, simply make the hyperlink you are using link to a page that can process the request as described above, i.e. an ashx or aspx page.
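For instance, a hedged sketch of the wiring in the page's code-behind; DownloadPdf.ashx and the "file" query-string key are made-up names for the proxy page described above:
protected void Page_Load(object sender, EventArgs e)
{
    // Point the HyperLink at the proxy handler instead of the physical path;
    // right-click Save As then works because it is a normal HTTP link.
    HyperLink1.NavigateUrl = "~/DownloadPdf.ashx?file=" + HttpUtility.UrlEncode("somePDF.pdf");
}
The handler then reads the name from the query string, maps it into C:\ExtraContent\, and streams the bytes back as in the snippet above.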
You need to create a hyperlink to a page that acts as a 'proxy', so that the page returns a response containing the file stream.
You cannot create a link to a file that is not part of your site.

How to download a file from my webserver to my desktop?

I have looked around the internet for three hours now looking for a solution to my problem, and I'm starting to wonder if anyone else has ever had this exact problem.
I am using IIS7 to host a website that does some local things around our office. While developing the site everything worked fine, but now that I am hosting the site, neither I nor anyone else can click on the link to download the file that is needed. (Let's just pretend they click on a link to download some random file from the webserver.)
It fails downloading the file from what I can guess is a permissions error. I have looked for a solution but cannot seem to find one. I know very little about IIS7, so I do not understand much of the Application Pool Identity stuff, although I did manage to grant that identity full access to the files/folders. Anyway, here is the specific area where it messes up:
This is a piece of the default.cshtml page that calls a Function from a .cs file:
@{
    // well.. pretty close to the exact thing, just got rid of a bunch of unnecessary junk
    string tmp = Functions.downloadFile(fileName);
}
<html>
@tmp
</html>
This is the part of the .cs file that actually downloads the file to the desktop
public static string downloadFile(string fileName) // I know this example doesn't
                                                   // use fileName, but the original code does.
{
    if (Directory.Exists(@"C:\WebSite\thingsToDownload"))
    {
        string[] files = Directory.GetFiles(@"C:\WebSite\thingsToDownload");
        foreach (string s in files)
        {
            string[] tmp = s.Split('\\');
            try
            {
                File.Copy(s,
                    Environment.GetFolderPath(Environment.SpecialFolder.Desktop)
                    + "\\" + tmp[tmp.Length - 1]);
            }
            catch (Exception e)
            {
                return "ERROR COPYING: " + e.Message;
            }
        }
        return "GOOD";
    }
    return "DIRECTORY DOESNT EXIST";
}
After all is said and done, I get "ERROR COPYING: Access to the path '\fileName' is denied."
My guess is that the webserver does not have access to the person's desktop to put the files there.
If anyone can shed some light and help me get this it would be greatly appreciated!
If you want to download a file by clicking a link, you should use the Response object.
Here's a sample you can use, for example, when a user clicks a link or button:
const string fName = @"C:\picture.bmp";
FileInfo fi = new FileInfo(fName);
long sz = fi.Length;
Response.ClearContent();
Response.ContentType = "application/octet-stream"; // must be a MIME type, not the file extension
Response.AddHeader("Content-Disposition", string.Format("attachment; filename = {0}", System.IO.Path.GetFileName(fName)));
Response.AddHeader("Content-Length", sz.ToString("F0"));
Response.TransmitFile(fName);
Response.End();
It worked when you developed it because it was running on your computer and thus had access to your desktop. There is no such thing as "download to desktop". You can serve files and let the user decide where to save them, on the desktop or elsewhere, but you have to do this over HTTP, not by copying directly from the server.
