How to read a picture from a URL and show it on my page - C#

I have a SQL table which holds the following information:
id (hash)
imagename string
width int
height int
What is the best way to create a .NET image handler that will show these images on a page? I would like to call it like image.aspx/ashx?id=[id], and the handler should fetch and display the corresponding image.
I know how to get the data from SQL, but I don't know how to read the image from a URL and return it as an image.
Could anyone please point me at some relevant information on how to do this, or show a piece of code that demonstrates how it works?
Do I read it as a stream?
Thanks

// Map the id from the query string to a file under the IMAGES folder
string imageFileName = context.Request.MapPath(@"IMAGES\" + context.Request.QueryString["id"]);
context.Response.ContentType = "image/jpeg";
context.Response.WriteFile(imageFileName);
context.Response.Flush();
context.Response.Close();
http://blogs.msdn.com/b/alikl/archive/2008/05/02/asp-net-performance-sin-serving-images-dynamically-or-another-reason-to-love-fiddler.aspx
http://msdn.microsoft.com/en-us/library/ms973917.aspx

Check out this article: http://aspnet-cookbook.info/O.Reilly-ASP.NET.Cookbook.Second.Edition/0596100647/aspnetckbk2-CHP-20-SECT-2.html
You'll want to create an HttpHandler class and wire that up in your web.config.
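For example, here is a minimal sketch of such a handler, assuming a hypothetical GetImageName helper that looks up imagename in your SQL table by id, and an IMAGES folder like the snippet above. If you put the class in an .ashx file behind a <%@ WebHandler %> directive, no web.config wiring is needed:
using System.Web;

public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical helper: query the SQL table (id, imagename, width, height)
        // and return the stored imagename for this id.
        string imageName = GetImageName(context.Request.QueryString["id"]);
        string imageFileName = context.Server.MapPath("~/IMAGES/" + imageName);

        context.Response.ContentType = "image/jpeg";
        context.Response.WriteFile(imageFileName);
    }

    public bool IsReusable
    {
        get { return true; }
    }

    private string GetImageName(string id)
    {
        // Placeholder: replace with your own SQL lookup.
        return id + ".jpg";
    }
}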

You can retrieve remote resources (such as images) via HTTP using the System.Net.WebRequest class.
WebRequest request = WebRequest.Create("http://www.doesnotexist.com/ghost.png");
WebResponse response = request.GetResponse();
Stream stream = response.GetResponseStream();
BinaryReader reader = new BinaryReader(stream);
byte[] imageBytes = reader.ReadBytes((int)response.ContentLength);
Note that there might be better ways to read the bytes from the Stream. You should also remember to add using statements where appropriate to properly dispose of any unmanaged resources.
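For instance, here is a sketch of a more defensive version that copies the response into a MemoryStream instead of trusting Content-Length, and disposes everything with using statements (the URL is just the same placeholder as above):
byte[] imageBytes;
WebRequest request = WebRequest.Create("http://www.doesnotexist.com/ghost.png");
using (WebResponse response = request.GetResponse())
using (Stream stream = response.GetResponseStream())
using (MemoryStream memory = new MemoryStream())
{
    // Copy the response stream in chunks; this works even when the length is unknown.
    byte[] buffer = new byte[4096];
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        memory.Write(buffer, 0, read);
    }
    imageBytes = memory.ToArray();
}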

Related

How to display TIFF images in an image control in ASP.NET C#?

I'm using Visual Studio 2008 and I want to convert a .tiff file and show it in an image control. I can display the image using a URL I get from a website, but when I use the path/URL from my own server it says the parameter is not valid. I've searched all over the internet but can't find a solution that fixes it. Hope you can help me. Thanks in advance.
Here's my code:
string fileName = "";
// This is the link I get from the website. It successfully displays the image.
fileName = "https://support.leadtools.com/SupportPortal/CS/forums/44475/PostAttachment.aspx";
// But when I use this to get the tiff it says "parameter is not valid". The path below is just an example.
fileName = "http://123.456.7.89:00/test/test.tiff";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(fileName);
request.Method = "GET";
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream s = response.GetResponseStream();
Bitmap bm = new Bitmap(s);
I ran into this same kind of issue in my application once; in my case it was caused by a larger file that wasn't fully loaded into memory at the time it was accessed. Please check the size of the file you are trying to display. Hope this helps.
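One way to rule that out (a sketch, not tested against your server) is to buffer the whole response into a seekable MemoryStream before handing it to the Bitmap constructor, since GDI+ expects a seekable, fully available stream:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(fileName);
request.Method = "GET";
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream s = response.GetResponseStream())
{
    MemoryStream buffered = new MemoryStream();
    byte[] chunk = new byte[4096];
    int read;
    while ((read = s.Read(chunk, 0, chunk.Length)) > 0)
    {
        buffered.Write(chunk, 0, read);
    }
    buffered.Position = 0;
    // The Bitmap keeps reading from the stream, so keep "buffered" alive as long as "bm" is used.
    Bitmap bm = new Bitmap(buffered);
}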

Download file directly to memory

I would like to load an Excel file directly from an FTP site into a memory stream. Then I want to open the file in the FarPoint Spread control using the OpenExcel(Stream) method. My issue is I'm not sure if it's possible to download a file directly into memory. Anyone know if this is possible?
Yes, you can download a file from FTP to memory.
I think you can even pass the Stream from the FTP server to be processed by FarPoint.
WebRequest request = FtpWebRequest.Create("ftp://asd.com/file");
using (WebResponse response = request.GetResponse())
{
    Stream responseStream = response.GetResponseStream();
    OpenExcel(responseStream);
}
Using WebClient you can do nearly the same. Generally WebClient is easier to use but gives you fewer configuration options and less control (e.g. no timeout setting).
WebClient wc = new WebClient();
using (MemoryStream stream = new MemoryStream(wc.DownloadData("ftp://asd.com/file")))
{
    OpenExcel(stream);
}
Take a look at WebClient.DownloadData. You should be able to download the file directly to memory without writing it to a file first.
This is untested, but something like:
var spreadSheetStream = new MemoryStream(new WebClient().DownloadData(yourFilePath));
I'm not familiar with FarPoint though, to say whether or not the stream can be used directly with the OpenExcel method. Online examples show the method being used with a FileStream, but I'd assume any kind of Stream would be accepted.
Download file from URL to memory.
My answer does not show exactly how to download a file for use in Excel, but it shows how to create a general-purpose in-memory byte array.
private static byte[] DownloadFile(string url)
{
    byte[] result = null;
    using (WebClient webClient = new WebClient())
    {
        result = webClient.DownloadData(url);
    }
    return result;
}
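As a usage sketch, assuming the FarPoint OpenExcel(Stream) method mentioned in the question accepts any readable Stream, you could wrap the returned bytes in a MemoryStream:
byte[] data = DownloadFile("ftp://asd.com/file");
using (MemoryStream stream = new MemoryStream(data))
{
    OpenExcel(stream);
}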

Get Size of Image File before downloading from web

I am downloading image files from the web using the following code in my console application.
WebClient client = new WebClient();
client.DownloadFile(string address_of_image_file,string filename);
The code is running absolutely fine.
I want to know if there is a way I can get the size of this image file before I download it.
PS: I have actually written a crawler that moves around a site downloading image files, so I don't know the size beforehand. All I have is the complete path of the file, which has been extracted from the source of the webpage.
Here is a simple example you can try. If you have files with different extensions like .GIF, .JPG, etc., you can create a variable or wrap the code in a switch statement.
System.Net.WebClient client = new System.Net.WebClient();
client.OpenRead("http://someURL.com/Images/MyImage.jpg");
Int64 bytes_total = Convert.ToInt64(client.ResponseHeaders["Content-Length"]);
MessageBox.Show(bytes_total.ToString() + " Bytes");
If the web-service gives you a Content-Length HTTP header then it will be the image file size. However, if the web-service wants to "stream" data to you (using Chunk encoding), then you won't know until the whole file is downloaded.
You can use this code:
using System.Net;
public long GetFileSize(string url)
{
    long result = 0;
    WebRequest req = WebRequest.Create(url);
    req.Method = "HEAD";
    using (WebResponse resp = req.GetResponse())
    {
        if (long.TryParse(resp.Headers.Get("Content-Length"), out long contentLength))
        {
            result = contentLength;
        }
    }
    return result;
}
You can use an HttpWebRequest to issue a HEAD request for the file and check the Content-Length in the response.
You should look at this answer: C# Get http:/…/File Size, where your question is fully explained. It uses an HTTP HEAD request to retrieve the file size, but you can also read the "Content-Length" header during a GET request, before reading the response stream.
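A sketch of that second approach (the URL, file name, and 5 MB limit are placeholders; ContentLength is -1 when the server does not send the header):
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://someURL.com/Images/MyImage.jpg");
using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
{
    // ContentLength comes from the response headers, so it is known before the body is consumed.
    long size = resp.ContentLength;
    if (size >= 0 && size < 5 * 1024 * 1024) // e.g. only download images under 5 MB
    {
        using (Stream body = resp.GetResponseStream())
        using (FileStream file = File.Create("MyImage.jpg"))
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
            {
                file.Write(buffer, 0, read);
            }
        }
    }
}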

Problem while working with jQuery colorbox and dynamic images read via ASPX

To show full-size images on my site I've decided to use the jQuery colorbox plugin. It works well with a static image location like:
<a rel="ex1" href="http://www.blah.com/image.jpg"><img src="http://www.blah.com/image_thumb.jpg"/></a>
But when I get the images from a directory using a binary read/write, the plugin shows me garbage data instead of a rendered jpg/image, like the following:
<a rel="ex1" href="http://www.blah.com/getimage.aspx?id=1234"><img src="http://www.blah.com/getimage.aspx?id=1234"/></a>
And here is my snippet of code for getting the dynamic image:
thumbLocation = DataHelper.GetItemPicture(recordID);
using (FileStream IMG = new FileStream(thumbLocation, FileMode.Open))
{
    byte[] buffer = new byte[IMG.Length];
    IMG.Read(buffer, 0, (int)IMG.Length);
    Response.Clear();
    Response.ContentType = "image/JPEG";
    Response.AddHeader("Content-Length", buffer.Length.ToString());
    Response.BinaryWrite(buffer);
    Response.End();
}
How can I fix this problem?
Use colorbox's photo property. Example:
$('a.example').colorbox({photo:true});
The reason is that colorbox's regex to auto-detect image URLs is going to fail for that kind of URL (doesn't contain an image-type extension).
Some ideas:
Change the content type to "image/jpeg" (the caps might matter).
Add the following to the end of the URL: &thisisan.jpg (some browsers will not render an image if they don't see this at the end of the URL).
Test by putting the image URL directly into the browser.

Can I display a PDF, but not allow linking to it in a website?

I have a website that has a bunch of PDFs that are pre-created and sitting on the webserver.
I don't want to allow a user to just type in a URL and get the PDF file (i.e. http://MySite/MyPDFFolder/MyPDF.pdf).
I want to only allow them to be viewed when I load them and display them.
I have done something similar before. I used PDFSharp to create a PDF in memory and then load it to a page like this:
protected void Page_Load(object sender, EventArgs e)
{
    try
    {
        MemoryStream streamDoc = BarcodeReport.GetPDFReport(ID, false);

        // Set the ContentType to pdf, add a header for the length
        // and write the contents of the memorystream to the response
        Response.ContentType = "application/pdf";
        Response.AddHeader("content-length", Convert.ToString(streamDoc.Length));
        Response.BinaryWrite(streamDoc.ToArray());

        // End the response
        Response.End();
        streamDoc.Close();
    }
    catch (NullReferenceException)
    {
        Communication.Logout();
    }
}
I tried to use this code to read from a file, but could not figure out how to get a MemoryStream to read in a file.
I also need a way to say that the "/MyPDFFolder" path is non-browsable.
Thanks for any suggestions
To load a PDF file from the disk into a buffer:
byte[] buffer;
using (FileStream fileStream = new FileStream(Filename, FileMode.Open))
{
    using (BinaryReader reader = new BinaryReader(fileStream))
    {
        buffer = reader.ReadBytes((int)reader.BaseStream.Length);
    }
}
Then you can create your MemoryStream like this:
using (MemoryStream msReader = new MemoryStream(buffer, false))
{
    // your code here.
}
But if you already have your data in memory, you don't need the MemoryStream. Instead do this:
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Length", buffer.Length.ToString());
Response.BinaryWrite(buffer);
// End the response
Response.End();
Anything that is displayed on the user's screen can be captured. You might protect your source files by using a browser-based PDF viewer, but you can't prevent the user from taking snapshots of the data.
As far as keeping the source files safe...if you simply store them in a directory that is not under your web root...that should do the trick. Or, on Apache, you can use an .htaccess file to restrict access to the directory (on IIS the equivalent would be authorization rules in web.config).
Keltex's code works for limiting who can get to the file. If the user isn't authorized for a particular file, give them a page with an error message, otherwise use that code to relay them the PDF. The URL then won't be directly to a PDF, but rather a script, so that will give you 100% control over who is permitted to access it.
Rather than putting the PDFs in question in an accessible location and messing with the configuration to hide them, you could put them someplace on the server that isn't directly web accessible. Since you'll have code reading the file into a buffer and relaying it to the user anyway, it doesn't matter where on the server the file is located, as long as it is accessible to your code.
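A minimal sketch of that idea, assuming a hypothetical UserMayView check and a PdfStorePath folder outside the web root (both names are placeholders, not an existing API):
protected void Page_Load(object sender, EventArgs e)
{
    string pdfName = Request.QueryString["name"];

    // Hypothetical authorization check; replace with your own logic.
    if (!UserMayView(pdfName))
    {
        Response.StatusCode = 403;
        Response.End();
        return;
    }

    // PdfStorePath points to a folder that is not under the web root,
    // so the files cannot be requested directly by URL.
    // Path.GetFileName strips any directory parts to avoid path traversal.
    string fullPath = Path.Combine(PdfStorePath, Path.GetFileName(pdfName));
    byte[] buffer = File.ReadAllBytes(fullPath);

    Response.ContentType = "application/pdf";
    Response.AddHeader("Content-Length", buffer.Length.ToString());
    Response.BinaryWrite(buffer);
    Response.End();
}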
