C# Image Gallery - Image Button Not Showing

I'm making a C# image gallery for a website (I know there are many free ones out there, but I want the experience). I'm grabbing files from a directory on the website and storing them in an array.
protected void Page_Load(object sender, EventArgs e)
{
string[] files = null;
files = Directory.GetFiles(Server.MapPath(@"Pictures"), "*.jpg");
I'm then creating an array of ImageButtons (which I will use as thumbnails) and dynamically adding them to a panel on the web form. The image buttons are added to the form correctly, but the pictures show the little square/circle/triangle broken-image symbol and fail to load the actual image.
ImageButton[] arrIbs = new ImageButton[files.Length];
for (int i = 0; i < files.Length; i++)
{
arrIbs[i] = new ImageButton();
arrIbs[i].ID = "imgbtn" + Convert.ToString(i);
arrIbs[i].ImageUrl = Convert.ToString(files[i]);
Response.Write(Convert.ToString(files[i]) + "**--**");
arrIbs[i].Width = 160;
arrIbs[i].Height = 100;
arrIbs[i].BorderStyle = BorderStyle.Inset;
//arrIbs[i].BorderStyle = 2;
arrIbs[i].AlternateText = System.IO.Path.GetFileName(Convert.ToString(files[i]));
arrIbs[i].PostBackUrl = "default.aspx?Img=" + Convert.ToString(files[i]);
pnlThumbs.Controls.Add(arrIbs[i]);
}
}
This may or may not be related to the issue (if not, treat it as a sub-question). When I pass @"~/Gallery/Pictures" to Server.MapPath() (which is where the directory sits relative to the site root), I get an error stating that "C:/.../.../.../... could not be found". The website only builds if I set the directory to "Pictures", which is the folder containing the pictures, and "Pictures" sits in the same folder as the Default.aspx containing the code above. I never have much luck with the ~ (tilde) character. Is this a file-structure issue or an IIS issue?

The problem is that you're setting a physical path on the server as the image button source. The browser will try to load these images from the client's machine, so they cannot load. You will also need to make sure that the ASPNET user on the server has read permission on that folder.
What you need to do is serve the JPEG streams as the source for the image buttons.
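As an aside (my note, not part of the original answer): if the Pictures folder lives under the web root, an alternative to streaming is to hand the browser an app-relative URL instead of a physical path. A minimal sketch of that path conversion, with hypothetical folder names:

```csharp
using System;

// Sketch: turn a physical path under the site root into an app-relative
// virtual path ("~/...") that an ImageButton's ImageUrl can use directly.
// The example paths below are hypothetical.
string ToVirtualPath(string physicalPath, string appRootPhysical)
{
    string relative = physicalPath.Substring(appRootPhysical.Length)
        .TrimStart('\\', '/')
        .Replace('\\', '/');
    return "~/" + relative;
}

Console.WriteLine(ToVirtualPath(@"C:\site\Gallery\Pictures\a.jpg", @"C:\site"));
// -> ~/Gallery/Pictures/a.jpg
```

In the original loop that would mean something like `arrIbs[i].ImageUrl = "~/Gallery/Pictures/" + Path.GetFileName(files[i]);` — server controls resolve the ~ themselves.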
You could have an aspx page which takes in the path in a query string parameter and loads the file and serves it.
E.g., have a page called GetImage.aspx like this:
<%@ Page Language="C#" %>
<%@ Import Namespace="System.IO" %>
<script runat="server" language="c#">
public void Page_Load()
{
try
{
Response.Clear();
Response.ContentType = "image/jpeg";
string filename = Page.Request.QueryString["file"];
using (FileStream stream = new FileStream(filename, FileMode.Open, FileAccess.Read))
{
// Stream.Read may return fewer bytes than requested, so copy the
// whole stream to the response rather than relying on a single Read call
stream.CopyTo(Response.OutputStream);
}
}
finally
{
Response.End();
}
}
</script>
and now when you create your ImageButtons, this should be your ImageUrl:
arrIbs[i].ImageUrl = String.Format("GetImage.aspx?file={0}", HttpUtility.UrlEncode(files[i]));
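One caveat worth adding (my note, not the original answer's): GetImage.aspx takes a raw file path from the query string, which invites path traversal. A hedged sketch of a containment check, with hypothetical paths:

```csharp
using System;
using System.IO;

// Sketch: accept a requested path only if it resolves to somewhere inside
// the allowed root folder. The example paths are hypothetical.
bool IsUnderRoot(string requestedPath, string rootFolder)
{
    string full = Path.GetFullPath(requestedPath);
    string root = Path.GetFullPath(rootFolder)
        .TrimEnd(Path.DirectorySeparatorChar) + Path.DirectorySeparatorChar;
    return full.StartsWith(root, StringComparison.OrdinalIgnoreCase);
}

Console.WriteLine(IsUnderRoot("/site/Pictures/a.jpg", "/site/Pictures"));         // True
Console.WriteLine(IsUnderRoot("/site/Pictures/../web.config", "/site/Pictures")); // False
```

In the handler you would return a 404 (and skip the stream copy) whenever the check fails.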


C# download button folder path

This is the C# code for a Web Form page named "upload.aspx" that allows the user to upload a PDF file and store it on the server. The uploaded file must be a PDF, and all the text boxes must be filled in before submission. The file name is saved in the database as a string in the format "GUID.pdf", using Guid.NewGuid().ToString().
tTR.FileName = Guid.NewGuid().ToString() + ".pdf";
dataBaseDataContext.TTRs.InsertOnSubmit(tTR);
dataBaseDataContext.SubmitChanges();
String path = Server.MapPath("Attachments/" + tTR.FileName);
FileUploadtoServer.SaveAs(path);
Response.Write("<script>alert('Successfully Inserted!')</script>");
Invoice.Text = "";
Manufacture.Text = "";
HeatCode.Text = "";
Description.Text = "";
PO_Number.Text = "";
I created a search function that allows the user to search based on Heat Code, Item Code, PO Number, and Description. The user can also edit and download the file by clicking on the "Edit / Download" link. The problem is in my Download button: it isn't downloading and can't read the path. I can see the PDF exists in the Attachments folder, but it's not able to find the correct file name associated with the heat code.
private void BindData(List<TTR> Data)
{
try
{
if (Data.Count > 0)
{
StringBuilder append = new StringBuilder();
foreach (TTR tTR in Data)
{
string PdfGuid = tTR.FileName;
append.Append("
}
tdList.InnerHtml = append.ToString();
}
}
catch (Exception)
{
}
}
protected void Button1_Click(object sender, EventArgs e)
{
try
{
string filePath = Server.MapPath("~/Attachments/" + filename + ".pdf");
byte[] content = File.ReadAllBytes(filePath);
Response.Clear();
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition", "attachment; filename=" + filename + ".pdf");
Response.Buffer = true;
Response.BinaryWrite(content);
Response.End();
}
catch (Exception)
{
}
}
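One plausible cause worth checking (my observation, not stated in the thread): the upload stores Guid.NewGuid().ToString() + ".pdf" in the database, while the download builds the path as filename + ".pdf", so if filename already carries the extension the result is "GUID.pdf.pdf", which does not exist on disk. A small sketch that normalizes the stored name (the helper name is hypothetical):

```csharp
using System;

// Sketch: build the attachment path without doubling the ".pdf" extension,
// since the stored name may or may not already include it.
string BuildAttachmentPath(string storedFileName)
{
    string name = storedFileName.EndsWith(".pdf", StringComparison.OrdinalIgnoreCase)
        ? storedFileName
        : storedFileName + ".pdf";
    return "~/Attachments/" + name; // pass this to Server.MapPath in the page
}

Console.WriteLine(BuildAttachmentPath("3f2504e0-4f89-11d3-9a0c-0305e82c3301.pdf"));
// -> ~/Attachments/3f2504e0-4f89-11d3-9a0c-0305e82c3301.pdf
```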

Copy a visio page to a new document

What I want to accomplish:
I want to copy the active page in my Visio application to a new document and save it (and make it a byte[] for the db), I am already doing this but in a slightly "wrong" way as there is too much interaction with the Visio application.
Method to copy page to byte array:
private static byte[] VisioPageToBytes()
{
//Make a new invisible app to dump the shapes in
var app = new InvisibleApp();
Page page = MainForm.IVisioApplication.ActivePage;
app.AlertResponse = 2;
//Select all shapes and copy, then deselect
MainForm.IVisioApplication.ActiveWindow.SelectAll();
MainForm.IVisioApplication.ActiveWindow.Selection.Copy();
MainForm.IVisioApplication.ActiveWindow.DeselectAll();
//Add empty document to invisible app and dump shapes
app.Documents.Add( string.Empty );
app.ActivePage.Paste();
//Save document and convert to byte[]
app.ActiveDocument.SaveAs( Application.UserAppDataPath + @"/LastStored.vsd" );
app.ActiveDocument.Close();
app.Quit();
app.AlertResponse = 0;
var bytes = File.ReadAllBytes( Application.UserAppDataPath + @"/LastStored.vsd" );
Clipboard.Clear();
return bytes;
}
Why it's wrong:
This code makes selections in the Visio page and has to open an invisible window to store the page. I'm looking for a way with less interaction with the Visio application (as it's unstable). Opening the second (invisible) Visio application occasionally makes my main Visio application crash.
I would like to do something like:
Page page = MainForm.IVisioApplication.ActivePage;
Document doc;
doc.Pages.Add( page ); //Pages.Add has no parameters so this doesn't work
doc.SaveAs(Application.UserAppDataPath + @"/LastStored.vsd");
If this is not possible in a way with less interaction (by "building" the document), please comment to let me know.
TL;DR;
I want to make a new Visio document without opening Visio and copy (the content of) one page to it.
If you want to create a copy of a page then you might find the Duplicate method on Page handy, but by the sounds of it, just saving the existing doc should work:
void Main()
{
var vApp = MyExtensions.GetRunningVisio();
var sourcePage = vApp.ActivePage;
var sourcePageNameU = sourcePage.NameU;
var vDoc = sourcePage.Document;
vDoc.Save(); //to retain original
var origFileName = vDoc.FullName;
var newFileName = Path.Combine(vDoc.Path, $"LastStored{Path.GetExtension(origFileName)}");
vDoc.SaveAs(newFileName);
//Remove all other pages
for (short i = vDoc.Pages.Count; i > 0; i--)
{
if (vDoc.Pages[i].NameU != sourcePageNameU)
{
vDoc.Pages[i].Delete(0);
}
}
//Save single page state
vDoc.Save();
//Close copy and reopen original
vDoc.Close();
vDoc = vApp.Documents.Open(origFileName);
}
GetRunningVisio is my extension method for using with LinqPad:
http://visualsignals.typepad.co.uk/vislog/2015/12/getting-started-with-c-in-linqpad-with-visio.html
...but you've already got a reference to your app so you can use that instead.
Update based on comments:
Ok, so how about this modification of your original code? Note that I'm creating a new Selection object from the page but not changing the Window one, so this shouldn't interfere with what the user sees or change the source doc at all.
void Main()
{
var vApp = MyExtensions.GetRunningVisio();
var sourcePage = vApp.ActivePage;
var sourceDoc = sourcePage.Document;
var vSel = sourcePage.CreateSelection(Visio.VisSelectionTypes.visSelTypeAll);
vSel.Copy(Visio.VisCutCopyPasteCodes.visCopyPasteNoTranslate);
var copyDoc = vApp.Documents.AddEx(string.Empty,
Visio.VisMeasurementSystem.visMSDefault,
(int)Visio.VisOpenSaveArgs.visAddHidden);
copyDoc.Pages[1].Paste(Visio.VisCutCopyPasteCodes.visCopyPasteNoTranslate);
var origFileName = sourceDoc.FullName;
var newFileName = Path.Combine(sourceDoc.Path, $"LastStored{Path.GetExtension(origFileName)}");
copyDoc.SaveAs(newFileName);
copyDoc.Close();
}
Note that this will only create a default page so you might want to include copying over page cells such as PageWidth, PageHeight, PageScale and DrawingScale etc. prior to pasting.

PDF opens when application is run locally, will not open when pushed to dev server

I have a grid that lists the uploaded documents for each record in another table. When I click View in the grid while the app is on my local machine, it opens the PDF from SQL Server with no problem. When I push it over to the dev server and click View, it just freezes up the application for a minute or so. I'm not quite sure what's going on here, though I suspect that the method may be trying to open the PDF ON the server instead of on my machine?
protected void UploadedDocumentsRadGrid_ItemCommand(object sender, GridCommandEventArgs e)
{
if(e.CommandName == "ViewDoc")
{
if(e.Item is GridDataItem)
{
GridDataItem item = (GridDataItem)e.Item;
var doc = from d in db.UploadedDocuments
where d.ID.ToString() == item["ID"].Text
select d;
foreach(var Doc in doc)
{
string filePath = Path.GetTempFileName();
File.Move(filePath, Path.ChangeExtension(filePath, ".pdf"));
filePath = Path.ChangeExtension(filePath, ".pdf");
File.WriteAllBytes(filePath, Doc.DocumentData.ToArray());
OpenPDFFile(filePath);
}
}
}
}
protected void OpenPDFFile(string filePath)
{
using(System.Diagnostics.Process p = new System.Diagnostics.Process())
{
p.StartInfo = new System.Diagnostics.ProcessStartInfo(filePath);
p.Start();
p.WaitForExit();
try
{
File.Delete(filePath);
}
catch { }
}
}
Further Explanation:
This application allows users to upload scanned documents into a SQL table. If the user needs to view an uploaded document, they should be able to click on that document in the grid and that document should then open on their local machine. Am I not going about this the right way?
UPDATE:
Everything is working as needed now. A big thank you to Sunil for the code they provided for me. I did have to change the SQL connection to a LINQ to SQL statement, which was no big deal. The final code is below:
var doc = from d in db.UploadedDocuments
where d.ID.ToString() == Session["ID"].ToString()
select d;
foreach (var Doc in doc)
{
byte[] bytes = Doc.DocumentData.ToArray();
this.Page.Response.Buffer = true;
this.Page.Response.Charset = "";
this.Page.Response.ClearContent();
if (this.Page.Request.QueryString["download"] == "1")
{
this.Page.Response.AppendHeader("Content-Disposition", "attachment; filename=PDF.pdf");
}
this.Page.Response.Cache.SetCacheability(HttpCacheability.NoCache);
this.Page.Response.ContentType = "application/pdf";
this.Page.Response.BinaryWrite(bytes);
this.Page.Response.Flush();
this.Page.Response.End();
}
As suggested by Paddy, you need to set the MIME type from your code.
In your case, if filePath points to a file like c:\myfiles\pdfs\abc.pdf then you can use the first code snippet, but if the filePath is like ~/files/abc.pdf, i.e. the PDF file is stored somewhere under the website root folder, then use the second code snippet. I am not sure why you would like to delete the file after it's opened in a browser.
When filePath is an absolute or UNC path
protected void OpenPDFFile(string filePath)
{
//set the appropriate ContentType.
Response.ContentType = "Application/pdf";
//write the file to http content output stream.
Response.WriteFile(filePath);
Response.End();
}
When filePath is a web path like ~/myfile.pdf
protected void OpenPDFFile(string filePath)
{
//set the appropriate ContentType.
Response.ContentType = "Application/pdf";
//get the absolute file path
filePath = MapPath(filePath);
//write the file to http content output stream.
Response.WriteFile(filePath);
Response.End();
}
UPDATE 1
From what you are saying, you want to delete files on a user's local computer from a remote web server since the web app code executes in a remote server. This is absolutely not possible, and even if it was, it would be a BIG security risk since a remote computer would then be controlling an end-user's computer. So, I suggest you follow the normal practice in a web app for streaming files to an end-user's computer.
If you have files stored in database then you could use code below to open a pdf file on end-user's computer. The end user would click on the file link in your gridview and the link click code on server-side would then execute to stream the pdf file to user's computer. Note that the link for file in your gridview should be a link button with a command argument equal to uploadfileId column value for that upload.
I have assumed that in your database there is a table Uploads that has these columns - UploadId, FileData, FileName, ContentType - with UploadId being an auto-incrementing primary key.
Markup for Link in GridView that will download pdf when clicked
<asp:TemplateField ItemStyle-HorizontalAlign="Center">
<ItemTemplate>
<asp:LinkButton ID="lnkViewPdfFile" runat="server" Text="View Pdf" OnClick="ViewPdfFile" CommandArgument='<%# Eval("UploadId") %>'></asp:LinkButton>
</ItemTemplate>
</asp:TemplateField>
Click event for above file LinkButton in code-behind
protected void ViewPdfFile(object sender, EventArgs e)
{
LinkButton btn = (LinkButton)sender;
int uploadId = int.Parse(btn.CommandArgument);
byte[] bytes;
string fileName, contentType;
string conString = ConfigurationManager.ConnectionStrings["appdatabase"].ConnectionString;
using (SqlConnection con = new SqlConnection(conString))
{
using (SqlCommand cmd = new SqlCommand())
{
cmd.CommandText = "SELECT FileName, FileData, ContentType FROM Uploads WHERE UploadId=@uploadId";
cmd.Parameters.AddWithValue("@uploadId", uploadId);
cmd.Connection = con;
con.Open();
using (SqlDataReader sdr = cmd.ExecuteReader())
{
sdr.Read();
bytes = (byte[])sdr["FileData"];
contentType = sdr["ContentType"].ToString();
fileName = sdr["FileName"].ToString();
}
con.Close();
}
}
Response.Buffer = true;
Response.Charset = "";
if (Request.QueryString["download"] == "1")
{
Response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);
}
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.ContentType = "application/pdf";
Response.BinaryWrite(bytes);
Response.Flush();
Response.End();
}
You are writing server-side code. You may find that there are a number of open PDFs on your web server (although your generic 'catch and swallow' exception handling may be masking any problems there). You need to stream the file data to your client with the appropriate MIME type.
It works locally because, when developing, your web server is the same machine as the one on which you open the browser.
It's hard to duplicate your code exactly, as you are opening multiple PDF files; you may find it better to open these in another window rather than serving them directly on postback like this.
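Since the MIME type keeps coming up in these answers, a small extension-to-content-type map can centralize the choice (a sketch; extend the cases as your app needs):

```csharp
using System;
using System.IO;

// Sketch: pick a Content-Type from the file extension, falling back to a
// generic binary type for anything unrecognized.
string GetContentType(string path)
{
    switch (Path.GetExtension(path).ToLowerInvariant())
    {
        case ".pdf": return "application/pdf";
        case ".jpg":
        case ".jpeg": return "image/jpeg";
        case ".png": return "image/png";
        default: return "application/octet-stream";
    }
}

Console.WriteLine(GetContentType("report.PDF")); // application/pdf
```

You would then write `Response.ContentType = GetContentType(filePath);` before streaming the bytes.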

Letting the Javascript finish before rendering pdf in ABC pdf

I'm trying to make a PDF of a web page that displays locations on Google Maps. The only problem is that the JavaScript isn't quite completing by the time ABCpdf renders the PDF, so the result is incomplete. How can I make ABCpdf wait until the JavaScript is 100% complete before the PDF is rendered? Here is what I've tried so far.
Doc theDoc = new Doc();
string theURL = url;
// Set HTML options
theDoc.HtmlOptions.AddLinks = true;
theDoc.HtmlOptions.UseScript = true;
theDoc.HtmlOptions.PageCacheEnabled = false;
//theDoc.HtmlOptions.Engine = EngineType.Gecko;
// JavaScript is used to extract all links from the page
theDoc.HtmlOptions.OnLoadScript = "var hrefCollection = document.all.tags(\"a\");" +
"var allLinks = \"\";" +
"for(i = 0; i < hrefCollection.length; ++i) {" +
"if (i > 0)" +
" allLinks += \",\";" +
"allLinks += hrefCollection.item(i).href;" +
"};" +
"document.documentElement.abcpdf = allLinks;";
// Array of links - start with base URL
// NOTE: this second assignment replaces the link-extraction script assigned above
theDoc.HtmlOptions.OnLoadScript = "(function(){window.ABCpdf_go = false; setTimeout(function(){window.ABCpdf_go = true;}, 1000);})();";
ArrayList links = new ArrayList();
links.Add(theURL);
for (int i = 0; i < links.Count; i++)
{
// Stop if we render more than 20 pages
if (theDoc.PageCount > 20)
break;
// Add page
theDoc.Page = theDoc.AddPage();
int theID = theDoc.AddImageUrl(links[i] as string);
// Links from the rendered page
string allLinks = theDoc.HtmlOptions.GetScriptReturn(theID);
string[] newLinks = allLinks.Split(new char[] { ',' });
foreach (string link in newLinks)
{
// Check to see if we already rendered this page
if (links.BinarySearch(link) < 0)
{
// Skip links inside the page
int pos = link.IndexOf("#");
if (!(pos > 0 && links.BinarySearch(link.Substring(0, pos)) >= 0))
{
if (link.StartsWith(theURL))
{
links.Add(link);
}
}
}
}
// Add other pages
while (true)
{
theDoc.FrameRect();
if (!theDoc.Chainable(theID))
break;
theDoc.Page = theDoc.AddPage();
theID = theDoc.AddImageToChain(theID);
}
}
// Link pages together
theDoc.HtmlOptions.LinkPages();
// Flatten all pages
for (int i = 1; i <= theDoc.PageCount; i++)
{
theDoc.PageNumber = i;
theDoc.Flatten();
}
byte[] theData = theDoc.GetData();
Response.Buffer = false; //new
Response.Clear();
//Response.ContentEncoding = Encoding.Default;
Response.ClearContent(); //new
Response.ClearHeaders(); //new
Response.ContentType = "application/pdf"; //new
Response.AddHeader("Content-Disposition", "attachment; filename=farts");
Response.AddHeader("content-length", theData.Length.ToString());
//Response.ContentType = "application/pdf";
Response.BinaryWrite(theData);
Response.End();
theDoc.Clear();
I had a very similar problem (rendering Google Visualization as PDF) and here's the trick that I used to partially solve it:
First of all, your JavaScript needs to be executed on DOMContentLoaded rather than on load (you will understand why in a moment). Next, create an empty page that serves its content on a timer (you can use System.Threading.Thread.Sleep to make the page "wait" for a certain amount of time).
Then, on the page that you want to render as PDF (the one containing the JavaScript that needs to finish before the PDF can be produced), place a hidden image. The "src" attribute of the image must have a URL pointing to your timer page (in the following example I specify the delay in milliseconds via the query string):
<img src="Timer.aspx?Delay=1000" style="width: 1px; height: 1px; visibility: hidden" />
Notice that I use visibility: hidden instead of display: none to hide the image. The reason is that some browsers might not start loading the image until it's visible.
Now what will happen is that ABCpdf will wait until the image is loaded while your JavaScript will be executing already (because the DOMContentLoaded is fired before load which waits until all images are loaded).
Of course, you cannot predict exactly how much time your JavaScript needs to execute. Another thing is that if ABCpdf is unable to load the page within 15 seconds (the default value, but I think you can change it), it will throw an exception, so be careful when choosing the delay.
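The timer page itself is only described in prose above; one guess at the shape of its core logic (not the author's actual Timer.aspx) is to parse and clamp the Delay value before sleeping, so a bad query string can't hang a worker thread indefinitely:

```csharp
using System;

// Sketch: sanitize the Delay query-string value for a Timer.aspx-style page.
// In the page itself you would then call System.Threading.Thread.Sleep(delay)
// before writing a tiny image to the response. The 10-second cap is an assumption.
int ClampDelay(string raw, int maxMs = 10000)
{
    if (!int.TryParse(raw, out int ms))
        return 1000; // fall back to one second on a missing or garbled value
    return Math.Min(Math.Max(ms, 0), maxMs);
}

Console.WriteLine(ClampDelay("1000"));   // 1000
Console.WriteLine(ClampDelay("999999")); // 10000
```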
Hope this helps.
In my case, we were upgrading v8 to v9 and generating a thumbnail image of a webpage that also required extensive JavaScript CSS manipulation for object positioning. When we switched to v9, we noticed the objects were duplicated (showing both in their original position and in the position they were supposed to occupy after the JS ran).
The workaround I applied was using the RenderDelay and OneStageRender settings to change how the page is rendered to PDF. The 500 is in ms, so half a second. The bigger culprit seemed to be OneStageRender; it had to be disabled for rendering to behave properly.
doc.SetInfo(0, "RenderDelay", "500");
doc.SetInfo(0, "OneStageRender", "0");
Try making your script block into a JavaScript function, and call that function from jQuery's $(document).ready() handler at the top of your file. I assume you're using jQuery. The ready() function will ensure all page elements have stabilized before it calls any functions in its body.

IE bug? Using a IHttpHandler to retrieve images from database, getting random blank images

I'm using ASP.NET/C# and trying to build an image gallery. My images are stored as byte data in the database, and I'm using an axd handler, getDocument.axd?attachmentID=X, to set an Image object which is then added to the aspx page on page load.
In IE most of the images are rendered to the page, but certain images aren't rendered; I get the default red X image. Interestingly, when I view the properties of the image, it does not have a file type. The files I'm retrieving are all JPGs.
I hope someone can help, because this is a real head-scratcher :)
I must note that this issue does not occur in Firefox/Chrome, where all images render correctly.
void IHttpHandler.ProcessRequest(HttpContext context)
{
if (context.Request.QueryString["attid"] != null)
{
int attid = int.Parse(context.Request.QueryString["attid"]);
// att: the attachment entity loaded from the database using attid (lookup code not shown)
context.Response.Clear();
context.Response.AddHeader("Content-Length", att.AttachmentData.Length.ToString());
context.Response.ContentType = att.MimeType.MimeHeader;
//context.Response.CacheControl = "no-cache";
context.Response.AddHeader("Content-Disposition", "attachment; filename=" + att.FileName.Replace(" ", "_") + "." + att.MimeType.FileExtension + ";");
context.Response.OutputStream.Write(att.AttachmentData, 0, att.AttachmentData.Length);
context.Response.End();
return;
}
}
In order to call this method, I get a List of IDs from the db and pull back the corresponding images by doing the following:
foreach (int i in lstImages)
{
Image tempImage = new Image();
Panel pnl = new Panel();
tempImage.ImageUrl = "getDocument.axd?attid=" + i;
tempImage.Attributes.Add("onclick", "javascript:populateEditor(" + i + ");");
tempImage.Height = 100;
tempImage.Width = 100;
pnl.Controls.Add(tempImage);
divImages.Controls.Add(tempImage);
}
* EDIT *
A colleague of mine noticed that some of my images had strange header information in the image file. We suspect that this might come from Photoshop saving the files, as all files not created by one specific person seem to display fine.
Having done this myself, I've never encountered this problem. Does it occur for the same image(s), or is it semi-random?
Check that the JPEGs are viewable in IE normally (i.e. as a source file, not through your handler), check the HTTP traffic with Fiddler, and check that the byte stream going out looks good.
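Given the suspicion about odd Photoshop headers, a quick sanity check on the stored bytes can separate corrupt rows from handler problems. A sketch (JPEG data starts with the SOI marker FF D8 FF):

```csharp
using System;

// Sketch: verify the stored byte data at least begins like a JPEG
// (SOI marker FF D8 FF) before serving it; flag rows that fail.
bool LooksLikeJpeg(byte[] data)
{
    return data != null
        && data.Length >= 3
        && data[0] == 0xFF && data[1] == 0xD8 && data[2] == 0xFF;
}

Console.WriteLine(LooksLikeJpeg(new byte[] { 0xFF, 0xD8, 0xFF, 0xE0 })); // True
Console.WriteLine(LooksLikeJpeg(new byte[] { 0x89, 0x50, 0x4E, 0x47 })); // False
```

Running this over att.AttachmentData for the images that show the red X would quickly tell you whether the database rows themselves are damaged.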
