I am fairly new to Silverlight. I am trying to download a .pdf file (and a couple of other formats) in Silverlight. The user clicks a button, the system goes and gets the URI, then shows a SaveFileDialog to obtain the location to save the file. Here is a code snippet:
WebClient wc = new WebClient();
wc.DownloadStringCompleted += (s, e3) =>
{
    if (e3.Error == null)
    {
        try
        {
            byte[] fileBytes = Encoding.UTF8.GetBytes(e3.Result);
            using (Stream fs = (Stream)mySaveFileDialog.OpenFile())
            {
                fs.Write(fileBytes, 0, fileBytes.Length);
                fs.Close();
                MessageBox.Show("File successfully saved!");
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show("Error getting result: " + ex.Message);
        }
    }
    else
    {
        MessageBox.Show(e3.Error.Message);
    }
};
wc.DownloadStringAsync(new Uri("myURI", UriKind.RelativeOrAbsolute));
The file gets saved OK, but it is about twice as big as the original and is unreadable. e3.Result looks about the right size (5 MB), but I suspect it contains a lot of extraneous characters. fileBytes seems to be about two times too big (11 MB). I wanted to try DownloadDataAsync instead of DownloadStringAsync (hoping it would resolve any encoding issues), but Silverlight has a very cut-down version of System.Net.WebClient and does not support DownloadDataAsync (it won't compile).
I am fairly sure it is an encoding problem, but I cannot see how to get around it.
PDF files are binary and not encoded using UTF8. To download a PDF file using Silverlight you need to use the OpenReadAsync method of the WebClient class to start downloading the binary data of the file, and not the DownloadStringAsync method as you seem to be doing.
Instead of handling the DownloadStringCompleted event you should handle the OpenReadCompleted event and write the received bytes to the stream of the local PDF file. If you set AllowReadStreamBuffering to true, the OpenReadCompleted event is only fired once the entire file has been downloaded, giving you the same behavior as DownloadStringCompleted. However, the entire PDF file will be buffered in memory, which may be a bad idea if the file is very large.
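A rough sketch of that approach could look like the following (it reuses mySaveFileDialog and the placeholder URI from the question; the 4 KB buffer size is an arbitrary choice):

WebClient wc = new WebClient();
wc.AllowReadStreamBuffering = true; // fire OpenReadCompleted only once the whole file is available
wc.OpenReadCompleted += (s, e) =>
{
    if (e.Error != null)
    {
        MessageBox.Show(e.Error.Message);
        return;
    }
    try
    {
        using (Stream input = e.Result)
        using (Stream output = mySaveFileDialog.OpenFile())
        {
            // Copy the raw bytes; no text encoding is involved.
            byte[] buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, bytesRead);
        }
        MessageBox.Show("File successfully saved!");
    }
    catch (Exception ex)
    {
        MessageBox.Show("Error saving file: " + ex.Message);
    }
};
wc.OpenReadAsync(new Uri("myURI", UriKind.RelativeOrAbsolute));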
Related
I have a web app running aspx; it is a bit of a legacy app. During submit, users upload supporting documents, usually pictures converted to PDF, to our SQL Server, where they are stored as binaries and can be downloaded again at various times during approval.
However, we have just started getting an issue where our users cannot open the PDFs in Adobe and get the dreaded "The file is damaged and could not be repaired." error message. They can still be opened in MS Edge, so they are not actually corrupted. I have verified that the PDFs can be opened fine before being uploaded. This is the upload code:
HttpPostedFile file = this.attachmentUploader.PostedFile;
if (file == null)
{
    file = Session["postedFile"] as HttpPostedFile;
}
if (file != null)
{
    var fileName = this.attachmentUploader.FileName;
    fileName = fileName.Length >= 100 ? string.Concat(fileName.Substring(0, 50).Trim(), ".pdf") : fileName;
    Attachment attachment = new Attachment()
    {
        FileName = fileName,
        File = this.attachmentUploader.FileBytes
    };
    db.Attachments.Add(attachment);
    db.SaveChanges();
}
This is the download code
byte[] file = null;
// Code here to pull file from db
if (file != null)
{
    Response.Buffer = true;
    Response.ContentType = "application/pdf";
    Response.AddHeader("Content-Disposition", "attachment;filename=support_doc.pdf");
    Response.OutputStream.Write(file, 0, file.Length);
}
Any help appreciated!
The downloaded file actually consists of two concatenated files: the actual PDF followed by an HTML file.
The HTML file is nearly 70 KB in size, and in the absence of external JavaScript and images it looks like this:
[--- image removed for privacy reasons ---]
I assume that after your "download code" some other code adds this HTML to the output.
You might want to search for that code, or you might want to simply close the Response.OutputStream and finish the response right after Response.OutputStream.Write(file, 0, file.Length).
According to the PDF specification, a PDF processor has to start reading a PDF from its end, where the cross-reference information is located; but in the case of the file at hand there are nearly 70 KB of trash there, as far as PDF syntax is concerned.
Thus, it is OK for any PDF viewer to reject the file as invalid.
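A minimal sketch of that second option, based on the download code in the question: end the response immediately after writing the PDF bytes so that nothing else (such as the page's own HTML) can be appended.

Response.Clear();
Response.Buffer = true;
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition", "attachment;filename=support_doc.pdf");
Response.OutputStream.Write(file, 0, file.Length);
// Stop any further output. Note that Response.End() ends page processing
// by throwing a ThreadAbortException.
Response.End();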
When using the following code to download a file:
WebClient wc = new WebClient();
wc.DownloadFileCompleted += new System.ComponentModel.AsyncCompletedEventHandler(wc_DownloadFileCompleted);
wc.DownloadFileAsync("http://path/file, "localpath/file");
and an error occurs during the download (no internet connection, file not found, etc.),
it creates a 0-byte file at localpath/file, which can get quite annoying.
Is there a way to avoid that in a clean way?
(I already probe for 0-byte files after a download error and delete them, but I don't think that is the recommended solution.)
If you reverse engineer the code for WebClient.DownloadFile you will see that the FileStream is instantiated before the download even begins. This is why the file is created even if the download fails. There's no way to amend that code, so you should consider a different approach.
There are many ways to approach this problem. Consider using WebClient.DownloadData rather than WebClient.DownloadFile, and only create or write to a file when the download is complete and you are sure you have the data you want.
WebClient client = new WebClient();
client.DownloadDataCompleted += (sender, eventArgs) =>
{
    // Did you receive the data successfully? Place your own condition here.
    if (eventArgs.Error == null && !eventArgs.Cancelled)
    {
        byte[] fileData = eventArgs.Result;
        using (FileStream fileStream = new FileStream("C:\\Users\\Alex\\Desktop\\Data.rar", FileMode.Create))
        {
            fileStream.Write(fileData, 0, fileData.Length);
        }
    }
    // Dispose only after the download has finished; disposing right after
    // DownloadDataAsync would tear the client down while the request is still running.
    client.Dispose();
};
client.DownloadDataAsync(address);
I have a C# Windows Phone 7.1 app that downloads a PDF file from a foreign web server and then (tries to) save it to the isolated storage area as a file. I have tried several different ways to get this done, but the file always ends up about 30% too large, and when I open it in a text editor, instead of seeing the usual '%PDF' characters at the start of the file followed by the encoded content, I see basically junk. The test file I'm using is supposed to be 161 KB, but when I view the file with the Isolated Storage Explorer, it's 271 KB.
First I download the file to a string. I inspected the string at this point in the debugger and it does contain the proper values, and it is the correct length. The trouble happens when I try to write it to the isolated storage area. I tried both StreamWriter and BinaryWriter with identical invalid results. The contents of the resulting file appear to be a long stream of junk characters. Note, I am deleting the file if it exists, just in case, before writing out the contents. Below is my code using the BinaryWriter version. What is wrong?
async public static Task URLToFileAsync(
    string strUrl,
    string strDestFilename,
    IProgress<int> progress,
    CancellationToken cancelToken)
{
    strUrl = strUrl.Trim();
    if (String.IsNullOrWhiteSpace(strUrl))
        throw new ArgumentException("(Misc::URLToFileAsync) The URL is empty.");

    strDestFilename = strDestFilename.Trim();
    if (String.IsNullOrWhiteSpace(strDestFilename))
        throw new ArgumentException("(Misc::URLToFileAsync) The destination file name is empty.");

    // Create the isolated storage file.
    // FileStream fs = Misc.CreateIsolatedStorageFileStream(strDestFilename);
    IsolatedStorageFile isoStorage = IsolatedStorageFile.GetUserStoreForApplication();

    // Delete the file first.
    if (isoStorage.FileExists(strDestFilename))
        isoStorage.DeleteFile(strDestFilename);

    IsolatedStorageFileStream theIsoStream = isoStorage.OpenFile(strDestFilename, FileMode.Create);
    FileStream fs = theIsoStream;

    // If the stream writer is NULL, then the file could not be created.
    if (fs == null)
        throw new System.IO.IOException("(Misc::URLToFileAsync) Error creating or writing to the file named: " + strDestFilename);

    BinaryWriter bw = new BinaryWriter(fs);
    try
    {
        // Call URLToStringAsync() to get the web file as a string first.
        string strFileContents = await URLToStringAsync(strUrl, progress, cancelToken);
        // >>>> NOTE: strFileContents looks correct and is the correct size.

        // Operation cancelled?
        if (!safeCancellationCheck(cancelToken))
        {
            // Note. BinaryWriter does not have an Async method so we take the hit here
            // to do a synchronous operation.
            // See this Stack Overflow post.
            // http://stackoverflow.com/questions/10315316/asynchronous-binaryreader-and-binarywriter-in-net
            // >>>> NOTE: strFileContents.ToCharArray() looks correct and is the correct length.
            bw.Write(strFileContents.ToCharArray(), 0, strFileContents.Length);
        } // if (safeCancellationCheck(cancelToken))
    }
    finally
    {
        // Make sure the file is cleaned up.
        bw.Flush();
        bw.Close();

        // Make sure the file is disposed.
        bw.Dispose();
    } // try/finally

    // >>>> NOTE: output file in Isolated Storage Explorer is the wrong size and contains apparently junk.
} // async public static void URLToFileAsync
You cannot download a binary file into a string. The result will not be correct, as you have found out.
See this answer, which demonstrates how to download a binary file to isolated storage: https://stackoverflow.com/a/6909201/1822514
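For reference, a rough sketch of that approach applied to the method above: download the raw bytes with OpenReadAsync instead of going through a string, then copy the stream into isolated storage. The variable names come from the question, the buffer size is arbitrary, and error reporting is left to your existing progress/cancellation handling.

WebClient wc = new WebClient();
wc.OpenReadCompleted += (s, e) =>
{
    if (e.Error != null || e.Cancelled)
        return; // surface the error through your own handling

    // Copy the raw bytes straight into isolated storage; no string/encoding step.
    using (IsolatedStorageFile isoStorage = IsolatedStorageFile.GetUserStoreForApplication())
    using (IsolatedStorageFileStream fs = isoStorage.OpenFile(strDestFilename, FileMode.Create))
    using (Stream input = e.Result)
    {
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
            fs.Write(buffer, 0, bytesRead);
    }
};
wc.OpenReadAsync(new Uri(strUrl));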
I am having an issue with deleting a file that was created just to send an email with an attachment and then view it in the browser. Now I need to delete this file, as it was only created to send the email. How can I do this?
Here is what I have got so far.
public void SendEmail()
{
    EmailClient.Send(mailMessage);
    // View PDF certificate in browser.
    ViewPDFinBrowser((string)fileObject);
    DeleteGeneratedTempCertificateFile((string)fileObject);
}

public void ViewPDFinBrowser(string filePath)
{
    PdfReader reader = new PdfReader(filePath);
    MemoryStream ms = new MemoryStream();
    PdfStamper stamper = new PdfStamper(reader, ms);
    stamper.ViewerPreferences = PdfWriter.PageLayoutSinglePage | PdfWriter.PageModeUseThumbs;
    stamper.Close();
    Response.Clear();
    Response.ContentType = "application/pdf";
    Response.OutputStream.Write(ms.GetBuffer(), 0, ms.GetBuffer().Length);
    Response.OutputStream.Close();
    HttpContext.Current.ApplicationInstance.CompleteRequest();
}

public static void DeleteGeneratedTempCertificateFile(Object fileObject)
{
    string filePath = (string)fileObject;
    if (File.Exists(filePath))
    {
        File.Delete(filePath);
    }
}
So here are the steps I need when I call SendEmail():
1) Send an email with the attachment --> temp file created
2) View the temp file in the browser
3) Delete the temp file
I can understand that as long as the file is tied up in the response object, I cannot do anything with it, because I get the error message ("File used by another process"). If I close the response stream then the file can be deleted, but then I can't view it in the browser.
I was thinking that if I can manage to somehow open the file for viewing in the browser in a new window on a button click, I will be able to delete the file.
OR
I am thinking I could delete the file after 10 minutes, as users won't be on the site viewing the PDF for more than 1-2 minutes.
Please advise me on one of these solutions, with example code.
I appreciate your time and help.
As others have said, it's better to use the MemoryStream as-is without writing temporary files to disk. Sometimes implementations of third-party components just won't allow this, and in such cases, after writing the binary contents of the PDF file, be sure to call Close (and/or possibly Dispose; always check MSDN or the third-party API docs for what .Close() actually does) on all streams that are no longer needed. In your case, close ms and reader after completing the HTTP request.
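As a sketch of that clean-up, here is the asker's ViewPDFinBrowser reworked so that every stream is closed once the response is done (iTextSharp's PdfReader/PdfStamper as in the original; ToArray() is used instead of GetBuffer() because GetBuffer() can include unused buffer capacity):

public void ViewPDFinBrowser(string filePath)
{
    PdfReader reader = null;
    MemoryStream ms = null;
    try
    {
        reader = new PdfReader(filePath);
        ms = new MemoryStream();
        PdfStamper stamper = new PdfStamper(reader, ms);
        stamper.ViewerPreferences = PdfWriter.PageLayoutSinglePage | PdfWriter.PageModeUseThumbs;
        stamper.Close(); // flushes the stamped PDF into ms

        Response.Clear();
        Response.ContentType = "application/pdf";
        byte[] pdfBytes = ms.ToArray();
        Response.OutputStream.Write(pdfBytes, 0, pdfBytes.Length);
        HttpContext.Current.ApplicationInstance.CompleteRequest();
    }
    finally
    {
        if (ms != null) ms.Dispose();       // the in-memory copy is no longer needed
        if (reader != null) reader.Close(); // releases the handle on the temp file so it can be deleted
    }
}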
In most cases, consider implementing the using pattern. See http://msdn.microsoft.com/en-us/library/aa664736.aspx for more details. However, remember that there are caveats to this approach too; for example, WCF clients can throw exceptions from within the using clause and thus not everything inside it actually gets disposed.
Also, keep in mind any concurrency issues. Keep the temporary file name random enough, and consider situations where the file already exists on the local disk (i.e. fail the operation and do not send out a binary that the requester is not supposed to see, etc.).
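If a temporary file really is unavoidable, one way to keep its name random enough per request is a sketch like this (the temp folder and the .pdf extension are just assumptions):

// Build a unique, per-request temp path for the generated certificate.
string tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ".pdf");

// ... generate the certificate into tempPath, email it, stream it to the browser ...

// Clean up as soon as the response has been written.
if (File.Exists(tempPath))
{
    File.Delete(tempPath);
}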
I have a website that has a bunch of PDFs that are pre-created and sitting on the webserver.
I don't want to allow a user to just type in a URL and get the PDF file (i.e. http://MySite/MyPDFFolder/MyPDF.pdf).
I want to only allow them to be viewed when I load them and display them.
I have done something similar before. I used PDFSharp to create a PDF in memory and then load it to a page like this:
protected void Page_Load(object sender, EventArgs e)
{
    try
    {
        MemoryStream streamDoc = BarcodeReport.GetPDFReport(ID, false);

        // Set the ContentType to pdf, add a header for the length
        // and write the contents of the memorystream to the response
        Response.ContentType = "application/pdf";
        Response.AddHeader("content-length", Convert.ToString(streamDoc.Length));
        Response.BinaryWrite(streamDoc.ToArray());

        // End the response
        Response.End();
        streamDoc.Close();
    }
    catch (NullReferenceException)
    {
        Communication.Logout();
    }
}
I tried to use this code to read from a file, but could not figure out how to get a MemoryStream to read in a file.
I also need a way to say that the "/MyPDFFolder" path is non-browsable.
Thanks for any suggestions
To load a PDF file from the disk into a buffer:
byte[] buffer;
using (FileStream fileStream = new FileStream(Filename, FileMode.Open))
{
    using (BinaryReader reader = new BinaryReader(fileStream))
    {
        buffer = reader.ReadBytes((int)reader.BaseStream.Length);
    }
}
Then you can create your MemoryStream like this:
using (MemoryStream msReader = new MemoryStream(buffer, false))
{
    // your code here.
}
But if you already have your data in memory, you don't need the MemoryStream. Instead do this:
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Length", buffer.Length.ToString());
Response.BinaryWrite(buffer);
//End the response
Response.End();
streamDoc.Close();
Anything that is displayed on the user's screen can be captured. You might protect your source files by using a browser-based PDF viewer, but you can't prevent the user from taking snapshots of the data.
As far as keeping the source files safe...if you simply store them in a directory that is not under your web root...that should do the trick. Or you can use an .htaccess file to restrict access to the directory.
Keltex's code works for limiting who can get to the file. If the user isn't authorized for a particular file, give them a page with an error message, otherwise use that code to relay them the PDF. The URL then won't be directly to a PDF, but rather a script, so that will give you 100% control over who is permitted to access it.
Rather than putting the PDFs in question in an accessible location and messing with the configuration to hide them, you could put them someplace in the server that isn't directly web accessible. Since you'll have code reading the file into a buffer and relaying it to the user anyway, it doesn't matter where on the server the file is located, so long as it is accessible to your code.
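As a rough sketch of the "serve it through a script" idea, here is what such a relay could look like as an ASP.NET handler. The folder path, the query-string parameter, and the authorization check are illustrative assumptions, not part of the answers above.

using System.IO;
using System.Web;

public class ProtectedPdfHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Your own authorization rule goes here.
        if (!context.User.Identity.IsAuthenticated)
        {
            context.Response.StatusCode = 403;
            return;
        }

        // The PDFs live outside the web root (e.g. D:\PdfStore), so they can
        // never be fetched by typing a URL directly.
        string name = Path.GetFileName(context.Request.QueryString["name"] ?? string.Empty);
        string path = Path.Combine(@"D:\PdfStore", name);

        if (!File.Exists(path))
        {
            context.Response.StatusCode = 404;
            return;
        }

        context.Response.ContentType = "application/pdf";
        context.Response.TransmitFile(path); // streams the file without buffering it all in memory
    }

    public bool IsReusable
    {
        get { return true; }
    }
}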