I'm trying to make a PDF of a web page that displays locations on Google Maps. The only problem is that the JavaScript isn't quite finished by the time ABCpdf renders the PDF, so the output is incomplete. How can I make ABCpdf wait until the JavaScript is 100% complete before the PDF is rendered? Here is what I've tried so far.
Doc theDoc = new Doc();
string theURL = url;
// Set HTML options
theDoc.HtmlOptions.AddLinks = true;
theDoc.HtmlOptions.UseScript = true;
theDoc.HtmlOptions.PageCacheEnabled = false;
//theDoc.HtmlOptions.Engine = EngineType.Gecko;
// JavaScript is used to extract all links from the page
theDoc.HtmlOptions.OnLoadScript = "var hrefCollection = document.all.tags(\"a\");" +
"var allLinks = \"\";" +
"for(i = 0; i < hrefCollection.length; ++i) {" +
"if (i > 0)" +
" allLinks += \",\";" +
"allLinks += hrefCollection.item(i).href;" +
"};" +
"document.documentElement.abcpdf = allLinks;";
// Attempted delay flag (note: this second OnLoadScript assignment overwrites the link-collection script above)
theDoc.HtmlOptions.OnLoadScript = "(function(){window.ABCpdf_go = false; setTimeout(function(){window.ABCpdf_go = true;}, 1000);})();";
// Array of links - start with base URL
ArrayList links = new ArrayList();
links.Add(theURL);
for (int i = 0; i < links.Count; i++)
{
// Stop if we render more than 20 pages
if (theDoc.PageCount > 20)
break;
// Add page
theDoc.Page = theDoc.AddPage();
int theID = theDoc.AddImageUrl(links[i] as string);
// Links from the rendered page
string allLinks = theDoc.HtmlOptions.GetScriptReturn(theID);
string[] newLinks = allLinks.Split(new char[] { ',' });
foreach (string link in newLinks)
{
// Check to see if we already rendered this page
if (links.BinarySearch(link) < 0)
{
// Skip links inside the page
int pos = link.IndexOf("#");
if (!(pos > 0 && links.BinarySearch(link.Substring(0, pos)) >= 0))
{
if (link.StartsWith(theURL))
{
links.Add(link);
}
}
}
}
// Add other pages
while (true)
{
theDoc.FrameRect();
if (!theDoc.Chainable(theID))
break;
theDoc.Page = theDoc.AddPage();
theID = theDoc.AddImageToChain(theID);
}
}
// Link pages together
theDoc.HtmlOptions.LinkPages();
// Flatten all pages
for (int i = 1; i <= theDoc.PageCount; i++)
{
theDoc.PageNumber = i;
theDoc.Flatten();
}
byte[] theData = theDoc.GetData();
Response.Buffer = false; //new
Response.Clear();
//Response.ContentEncoding = Encoding.Default;
Response.ClearContent(); //new
Response.ClearHeaders(); //new
Response.ContentType = "application/pdf"; //new
Response.AddHeader("Content-Disposition", "attachment; filename=farts");
Response.AddHeader("content-length", theData.Length.ToString());
//Response.ContentType = "application/pdf";
Response.BinaryWrite(theData);
Response.End();
theDoc.Clear();
I had a very similar problem (rendering Google Visualization as PDF) and here's the trick that I used to partially solve it:
First of all, your JavaScript needs to be executed on DOMContentLoaded rather than on load (you will understand why in a moment). Next, create an empty page that serves its content after a delay (you can use System.Threading.Thread.Sleep to make the page "wait" for a certain amount of time).
Then place a hidden image on the page that you want to render as PDF and that contains the JavaScript that needs to be executed before the PDF can be produced. The "src" attribute of the image must be a URL pointing to your timer page (in the following example I specify the delay in milliseconds via the query string):
<img src="Timer.aspx?Delay=1000" style="width: 1px; height: 1px; visibility: hidden" />
Notice that I use visibility: hidden instead of display: none to hide the image. The reason is that some browsers might not start loading the image until it's visible.
Now what will happen is that ABCpdf will wait until the image is loaded, while your JavaScript will already be executing (because DOMContentLoaded fires before load, and load waits until all images are loaded).
Of course, you cannot predict exactly how much time you need to execute your JavaScript. Another thing: if ABCpdf is unable to load the page within 15 seconds (the default value, but I think you can change it), it will throw an exception, so be careful when choosing the delay.
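The ordering this trick relies on can be sketched with a tiny stand-in dispatcher (DOMContentLoaded and load are the real browser events; the dispatcher below is just a stub so the sequence can be checked outside a browser):

```javascript
// Stand-in dispatcher: the browser fires DOMContentLoaded as soon as the
// DOM is parsed, and load only after all images (including the delayed
// timer image) have finished loading.
var handlers = {};
function on(event, fn) { (handlers[event] = handlers[event] || []).push(fn); }
function fire(event) { (handlers[event] || []).forEach(function (fn) { fn(); }); }

var order = [];
on('DOMContentLoaded', function () { order.push('run page JavaScript'); });
on('load', function () { order.push('ABCpdf renders'); });

// Browser ordering: DOM parsed first, images finish later.
fire('DOMContentLoaded');
fire('load');
```

So by the time the load event fires (and ABCpdf considers the page done), the script registered on DOMContentLoaded has had the whole image delay in which to run.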
Hope this helps.
In my case, we were upgrading v8 to v9 and generating a thumbnail image of a web page that also required extensive JavaScript CSS manipulation for object positioning. When we switched to v9, we noticed the objects were duplicated (showing in both their original position and the position they were supposed to be in after the JS ran).
The workaround I applied was using the RenderDelay and OneStageRender properties to change how the page is rendered to PDF. The 500 is in ms, so half a second. The bigger culprit seemed to be OneStageRender; that had to be disabled in order for rendering to complete properly.
doc.SetInfo(0, "RenderDelay", "500")
doc.SetInfo(0, "OneStageRender", 0)
Try making your script block into a JavaScript function, and call that function from $(document).ready() at the top of your file (I assume you're using jQuery). The ready() function will ensure all page elements have stabilized before it calls any functions in its body.
What I want to accomplish:
I want to copy the active page in my Visio application to a new document and save it (and make it a byte[] for the db). I am already doing this, but in a slightly "wrong" way, as there is too much interaction with the Visio application.
Method to copy page to byte array:
private static byte[] VisioPageToBytes()
{
//Make a new invisible app to dump the shapes in
var app = new InvisibleApp();
Page page = MainForm.IVisioApplication.ActivePage;
app.AlertResponse = 2;
//Select all shapes and copy, then deselect
MainForm.IVisioApplication.ActiveWindow.SelectAll();
MainForm.IVisioApplication.ActiveWindow.Selection.Copy();
MainForm.IVisioApplication.ActiveWindow.DeselectAll();
//Add empty document to invisible app and dump shapes
app.Documents.Add( string.Empty );
app.ActivePage.Paste();
//Save document and convert to byte[]
app.ActiveDocument.SaveAs( Application.UserAppDataPath + @"/LastStored.vsd" );
app.ActiveDocument.Close();
app.Quit();
app.AlertResponse = 0;
var bytes = File.ReadAllBytes( Application.UserAppDataPath + @"/LastStored.vsd" );
Clipboard.Clear();
return bytes;
}
Why it's wrong:
This code makes selections in the Visio page and has to open an invisible window to store the page. I'm looking for a way with less interaction with the Visio application (as it's unstable). Opening the second (invisible) Visio application occasionally makes my main Visio application crash.
I would like to do something like:
Page page = MainForm.IVisioApplication.ActivePage;
Document doc;
doc.Pages.Add( page ); //Pages.Add has no parameters so this doesn't work
doc.SaveAs(Application.UserAppDataPath + @"/LastStored.vsd");
If this is not possible in a way with less interaction (by "building" the document), please comment to let me know.
TL;DR;
I want to make a new Visio document without opening Visio and copy (the content of) one page to it.
If you want to create a copy of a page then you might find the Duplicate method on Page handy, but by the sound of it, just saving the existing doc should work:
void Main()
{
var vApp = MyExtensions.GetRunningVisio();
var sourcePage = vApp.ActivePage;
var sourcePageNameU = sourcePage.NameU;
var vDoc = sourcePage.Document;
vDoc.Save(); //to retain original
var origFileName = vDoc.FullName;
var newFileName = Path.Combine(vDoc.Path, $"LastStored{Path.GetExtension(origFileName)}");
vDoc.SaveAs(newFileName);
//Remove all other pages
for (short i = vDoc.Pages.Count; i > 0; i--)
{
if (vDoc.Pages[i].NameU != sourcePageNameU)
{
vDoc.Pages[i].Delete(0);
}
}
//Save single page state
vDoc.Save();
//Close copy and reopen original
vDoc.Close();
vDoc = vApp.Documents.Open(origFileName);
}
GetRunningVisio is my extension method for use with LINQPad:
http://visualsignals.typepad.co.uk/vislog/2015/12/getting-started-with-c-in-linqpad-with-visio.html
...but you've already got a reference to your app so you can use that instead.
Update based on comments:
Ok, so how about this modification of your original code? Note that I'm creating a new Selection object from the page but not changing the Window one, so this shouldn't interfere with what the user sees or change the source doc at all.
void Main()
{
var vApp = MyExtensions.GetRunningVisio();
var sourcePage = vApp.ActivePage;
var sourceDoc = sourcePage.Document;
var vSel = sourcePage.CreateSelection(Visio.VisSelectionTypes.visSelTypeAll);
vSel.Copy(Visio.VisCutCopyPasteCodes.visCopyPasteNoTranslate);
var copyDoc = vApp.Documents.AddEx(string.Empty,
Visio.VisMeasurementSystem.visMSDefault,
(int)Visio.VisOpenSaveArgs.visAddHidden);
copyDoc.Pages[1].Paste(Visio.VisCutCopyPasteCodes.visCopyPasteNoTranslate);
var origFileName = sourceDoc.FullName;
var newFileName = Path.Combine(sourceDoc.Path, $"LastStored{Path.GetExtension(origFileName)}");
copyDoc.SaveAs(newFileName);
copyDoc.Close();
}
Note that this will only create a default page, so you might want to copy over page cells such as PageWidth, PageHeight, PageScale and DrawingScale etc. prior to pasting.
I have a page, behaviorAnalysis.aspx, that calls a JavaScript function that does two things: 1) displays a modal dialog with a please-wait message; and 2) creates an iFrame and calls a second page, behaviorAnalysisDownload.aspx, via jQuery:
function isMultiPageExport(exportMedia) {
waitingDialog.show("Building File<br/>...this could take a minute", { dialogSize: "sm", progressType: "warning" });
var downloadFrame = document.createElement("IFRAME");
if (downloadFrame != null) {
downloadFrame.setAttribute("src", 'behaviorExport.aspx?exportType=html&exportMedia=' + exportMedia);
downloadFrame.style.width = "0px";
downloadFrame.style.height = "0px";
document.body.appendChild(downloadFrame);
}
}
The second page is downloading an Excel file using the following code snippet:
//*****************************************
//* Workbook Download & Cleanup
//*****************************************
MemoryStream stream = new MemoryStream();
wb.Write(stream);
stream.Dispose();
var xlsBytes = stream.ToArray();
string filename = "Behavior Stats YTD.xlsx";
MemoryStream newStream = new MemoryStream(xlsBytes);
if (Page.PreviousPage != null)
{
HiddenField exp = (HiddenField)Page.PreviousPage.FindControl("hidDownloadStatus");
exp.Value = "Complete";
}
HttpContext.Current.Response.ContentType = "application/octet-stream";
HttpContext.Current.Response.ContentEncoding = System.Text.Encoding.UTF8;
HttpContext.Current.Response.AddHeader("content-disposition", "attachment; filename=" + filename);
HttpContext.Current.Response.BinaryWrite(xlsBytes);
HttpContext.Current.Response.Flush();
HttpContext.Current.Response.End();
As you can see, I was hoping to update a hidden field on the calling page before pushing the download through; however, PreviousPage is null. Is there another approach I can use, to update the hidden field value from the calling page, behaviorAnalysis.aspx?
I would first recommend this jQuery library, which does exactly what you want with the background download and modal, but it's been tested cross-browser and has a lot of neato features already available: http://johnculviner.com/jquery-file-download-plugin-for-ajax-like-feature-rich-file-downloads/
If you don't like that, you can take a similar approach to what that plugin does. Instead of trying to set the value of a HiddenField on the parent, just add a cookie:
Response.SetCookie(new HttpCookie("fileDownload", "true") { Path = "/" });
After your first page appends the iFrame, just use setInterval() to check whether that cookie exists, with something like:
if (document.cookie.indexOf('fileDownload') > -1) {
document.cookie = 'fileDownload=; Path=/; Expires=Thu, 01 Jan 1970 00:00:01 GMT;' // remove cookie
// it was a success, so do something here
}
Of course, you'll want to put some sort of a timeout or logic to handle errors, but that should cover the basics of it.
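For illustration, the polling loop might look like the sketch below (the cookie name comes from the snippets above; hasDownloadCookie is factored out as a pure function, and pollForDownload is the browser-only part):

```javascript
// Pure check: does the cookie string contain the fileDownload marker?
function hasDownloadCookie(cookieString) {
  return cookieString.indexOf('fileDownload') > -1;
}

// Browser-side polling as described above; 'done' runs once the server
// has set the cookie alongside the file response.
function pollForDownload(done) {
  var timer = setInterval(function () {
    if (hasDownloadCookie(document.cookie)) {
      clearInterval(timer);
      // expire the cookie so the next download starts clean
      document.cookie = 'fileDownload=; Path=/; Expires=Thu, 01 Jan 1970 00:00:01 GMT;';
      done();
    }
  }, 500);
}
```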
Is there any reason why you have to offer the file on the client side by creating an iFrame? Maybe you can call your server-side method directly on the same page.
Another possibility would be to subscribe to the iframe's onload event and set the hidden value there (client side).
'<iframe src="....." onload="downloadComplete()"></iframe>';
In the second page, add this:
<%@ PreviousPageType VirtualPath="~/PreviousPage.aspx" %>
Then try it and let me know if this works.
I have been looking for a way to call a JavaScript function from my default.aspx.cs file. After reading some questions here, I discovered this approach (Call JavaScript function from C#).
However, I need to use the returned value from the JavaScript function in my .NET code.
I need to grab the user's input on a canvas, save it to an image, and append that image to a PDF file.
Here is the js function:
getSignatureImage: function () {
var tmpCanvas = document.createElement('canvas')
, tmpContext = null
, data = null
tmpCanvas.style.position = 'absolute'
tmpCanvas.style.top = '-999em'
tmpCanvas.width = element.width
tmpCanvas.height = element.height
document.body.appendChild(tmpCanvas)
if (!tmpCanvas.getContext && FlashCanvas)
FlashCanvas.initElement(tmpCanvas)
tmpContext = tmpCanvas.getContext('2d')
tmpContext.fillStyle = settings.bgColour
tmpContext.fillRect(0, 0, element.width, element.height)
tmpContext.lineWidth = settings.penWidth
tmpContext.strokeStyle = settings.penColour
drawSignature(output, tmpContext)
data = tmpCanvas.toDataURL.apply(tmpCanvas, arguments)
document.body.removeChild(tmpCanvas)
tmpCanvas = null
return data
}
Here is the pdf generating code in my default.aspx.cs:
document.SetMargins(20f, 20f, 85f, 20f);
long milliseconds2 = DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond;
//Document document = new Document();
var output = new FileStream(Server.MapPath("~/PDFs/Test-File-" + milliseconds2 + ".pdf"), FileMode.Create);
pdfUrlLink = "Test-File-" + milliseconds2 + ".pdf";
var writer = PdfWriter.GetInstance(document, output);
// the image we're using for the page header
iTextSharp.text.Image imageHeader = iTextSharp.text.Image.GetInstance(Request.MapPath(
"~/Images/pdfHeader.jpg"
));
// instantiate the custom PdfPageEventHelper
MyPageEventHandler ex = new MyPageEventHandler()
{
ImageHeader = imageHeader
};
// and add it to the PdfWriter
writer.PageEvent = ex;
document.Open();
createContent();
document.Close();
submitEmail();
}
}
This is the code, modified from other questions, to integrate both worlds:
string SignData = null;
Page page = HttpContext.Current.CurrentHandler as Page;
page.ClientScript.RegisterStartupScript(typeof(Page), "Test", "<script type='text/javascript'>" + SignData +"=getSignatureImage();</script>");
It doesn't work...Any help or input will be appreciated!
You don't want to call the javascript function from your C# code. What you want to do is post the data from your javascript function to your web app so your C# code can handle it.
The client HTML where the canvas is would have a button that, when clicked, calls your JavaScript to get the dataURL from the canvas, puts it in a form field (like a hidden input or something) and then submits the form.
Your C# code would pull the value out of the Request.Form collection. The value is a Base64 encoded image so you'll need to convert it to a bitmap and then use it in your PDF output.
Here's a quick fiddle that demonstrates: http://jsfiddle.net/LSRYG/
function submitImage(){
var url = document.getElementById('canv').toDataURL();
//this will show you the image data but you won't actually want to do this. It's just for the demo
alert(url);
//put it in the hidden input and submit the form.
document.getElementById('canvImg').value = url;
document.getElementById('theform').submit();
}
I'm sure you can find out how to convert the dataURL string to an image in your C# code.
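For what it's worth, the decode step amounts to stripping the "data:...;base64," prefix and Base64-decoding the remainder. A sketch of that split in JavaScript (Node's Buffer stands in here for what Convert.FromBase64String would do on the C# side):

```javascript
// A canvas dataURL looks like "data:image/png;base64,<payload>".
// Everything after the first comma is the Base64 payload.
function dataUrlToBytes(dataUrl) {
  var comma = dataUrl.indexOf(',');
  if (comma < 0) throw new Error('not a data URL');
  return Buffer.from(dataUrl.slice(comma + 1), 'base64');
}

// Example: round-trip a small payload.
var url = 'data:image/png;base64,' + Buffer.from('hello').toString('base64');
var bytes = dataUrlToBytes(url);
```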
What you found is how to tell the browser to run JavaScript on page load within an ASP.NET application.
What I think you're looking for is a Javascript compiler within .NET. A quick google led me to this link:
Embedding JavaScript engine into .NET
Can you change whatever control causes the postback to just run some JavaScript? Then you can call __doPostBack with getSignatureImage() as the event argument and figure it out from there.
Basically, have something with onclick="SendEmail();" where function SendEmail() { __doPostBack([controlClientID], getSignatureImage()); }. Then on your page load / is-postback check, see whether __EVENTTARGET (the first param) matches the control's client ID, and if it does, process __EVENTARGUMENT (the second param), which will be your getSignatureImage() result.
Either that, or make a WebMethod or service to send the email with the getSignatureImage() result.
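A rough sketch of that handoff, with __doPostBack and getSignatureImage stubbed out (in the real page ASP.NET emits __doPostBack and the signature plugin provides getSignatureImage; the control ID is a placeholder):

```javascript
// Stubs standing in for what the real page would provide.
var posted = null;
function __doPostBack(eventTarget, eventArgument) {
  // ASP.NET's version fills __EVENTTARGET / __EVENTARGUMENT and submits;
  // here we just record what would be posted.
  posted = { target: eventTarget, argument: eventArgument };
}
function getSignatureImage() { return 'data:image/png;base64,AAAA'; }

// The onclick handler suggested above:
function SendEmail() {
  __doPostBack('btnSendEmailClientID', getSignatureImage());
}

SendEmail();
// Server side, __EVENTTARGET / __EVENTARGUMENT now carry these values.
```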
I'm using ASP.NET/C# and trying to build an image gallery. My images are stored as byte data in the database, and I'm using an axd file, like so: getDocument.axd?attachmentID=X, to set an Image object which is then added to the aspx page on page load.
In IE most of the images are rendered to the page; however, certain images aren't rendered and I get the default red X image. Interestingly, when I view the properties of such an image it does not have a file type. The files I'm retrieving are all JPGs.
I hope someone can help, because this is a real head-scratcher :)
I must note that this issue does not occur in Firefox/Chrome; all images render correctly there.
void IHttpHandler.ProcessRequest(HttpContext context)
{
if (context.Request.QueryString["attid"] != null)
{
int attid = int.Parse(context.Request.QueryString["attid"]);
context.Response.Clear();
context.Response.AddHeader("Content-Length", att.AttachmentData.Length.ToString());
context.Response.ContentType = att.MimeType.MimeHeader;
//context.Response.CacheControl = "no-cache";
context.Response.AddHeader("Content-Disposition", "attachment; filename=" + att.FileName.Replace(" ", "_") + "." + att.MimeType.FileExtension + ";");
context.Response.OutputStream.Write(att.AttachmentData, 0, att.AttachmentData.Length);
context.Response.End();
return;
}
}
In order to call this method, I get a List of IDs from the db and pull back the corresponding images by doing the following:
foreach (int i in lstImages)
{
Image tempImage = new Image();
Panel pnl = new Panel();
tempImage.ImageUrl = "getDocument.axd?attid=" + i;
tempImage.Attributes.Add("onclick", "javascript:populateEditor(" + i + ");");
tempImage.Height = 100;
tempImage.Width = 100;
pnl.Controls.Add(tempImage);
divImages.Controls.Add(tempImage);
}
EDIT:
A colleague of mine noticed that some of my images had strange header information in the image file. We suspect this might be from Photoshop saving the files, as all files that were not created by one specific person seem to display fine.
Having done this myself, I've never encountered this problem. Does it occur for the same image(s), or is it semi-random?
Check that the JPEGs are viewable in IE normally (i.e. as a source file, not through your handler), check the HTTP traffic with Fiddler, and check that the bytestream going out looks good.
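One concrete way to check that the outgoing bytestream "looks good" is to verify the JPEG markers: every JPEG starts with the SOI marker FF D8 and ends with the EOI marker FF D9. A minimal sketch of that check:

```javascript
// Returns true if the byte array has the JPEG start/end markers
// (SOI = FF D8 at the front, EOI = FF D9 at the end).
function looksLikeJpeg(bytes) {
  return bytes.length >= 4 &&
         bytes[0] === 0xFF && bytes[1] === 0xD8 &&
         bytes[bytes.length - 2] === 0xFF && bytes[bytes.length - 1] === 0xD9;
}
```

Running the handler's output bytes through a check like this would quickly show whether the "strange header information" means the data isn't actually a well-formed JPEG.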
I'm making a C# image gallery for a website (I know there are many free ones out there, but I want the experience). I'm grabbing files from a directory on the website and storing them in an array.
protected void Page_Load(object sender, EventArgs e)
{
string[] files = null;
files = Directory.GetFiles(Server.MapPath(#"Pictures"),"*.jpg");
I then create an array of ImageButtons (which I will use as thumbnails) and dynamically add them into a panel on the web form. The image buttons are added to the form correctly, but the pictures show the little square/circle/triangle symbol and fail to load the actual image.
ImageButton[] arrIbs = new ImageButton[files.Length - 1];
for (int i = 0; i < files.Length-1; i++)
{
arrIbs[i] = new ImageButton();
arrIbs[i].ID = "imgbtn" + Convert.ToString(i);
arrIbs[i].ImageUrl = Convert.ToString(files[i]);
Response.Write(Convert.ToString(files[i]) + "**--**");
arrIbs[i].Width = 160;
arrIbs[i].Height = 100;
arrIbs[i].BorderStyle = BorderStyle.Inset;
//arrIbs[i].BorderStyle = 2;
arrIbs[i].AlternateText = System.IO.Path.GetFileName(Convert.ToString(files[i]));
arrIbs[i].PostBackUrl = "default.aspx?Img=" + Convert.ToString(files[i]);
pnlThumbs.Controls.Add(arrIbs[i]);
}
}
This may or may not be related to the issue (if not, consider it a sub-question). When setting Server.MapPath() to @"~/Gallery/Pictures" (which is where the directory is relative to the site root) I get an error. It states that "C:/.../.../.../... could not be found". The website only builds if I set the directory as "Pictures", which is where the pictures are; "Pictures" is in the same folder as "Default.aspx", where the above code lives. I never have much luck with the ~ (tilde) character. Is this a file-structure issue or an IIS issue?
The problem is that you're setting a path on the server as the image button source. The browser will try to load these images from the client's machine, hence they cannot load. You will also need to make sure that the ASPNET user on the server has permissions to that folder.
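To see why, compare the two kinds of src values: a physical path like C:\inetpub\...\a.jpg means nothing to the browser, while a relative URL it can request from the server does. A purely illustrative sanity check:

```javascript
// Rough check: drive-letter or UNC paths cannot be requested by the
// client; anything else is at least a candidate URL.
function isServableUrl(src) {
  return !/^[A-Za-z]:\\/.test(src) && src.indexOf('\\\\') !== 0;
}
```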
What you need to do is serve the JPEGs' streams as the source for the image buttons.
You could have an aspx page which takes in the path in a query string parameter and loads the file and serves it.
Eg, have a page called GetImage.aspx as such:
<%@ Page Language="C#" %>
<%@ Import Namespace="System.IO" %>
<script runat="server" language="c#">
public void Page_Load()
{
try
{
Response.Clear();
Response.ContentType = "image/jpeg";
string filename = Page.Request.QueryString["file"];
using (FileStream stream = new FileStream(filename, FileMode.Open))
{
int streamLength = (int)stream.Length;
byte[] buffer = new byte[streamLength];
stream.Read(buffer, 0, streamLength);
Response.BinaryWrite(buffer);
}
}
finally
{
Response.End();
}
}
</script>
and now when you create your ImageButtons, this should be your ImageUrl:
arrIbs[i].ImageUrl = String.Format("GetImage.aspx?file={0}", HttpUtility.UrlEncode(files[i]));