Download of image from URL - C#

I have to download images from a particular URL using a for loop, since some products have more than 5 images. When I click the product button, only some of the images get downloaded, not all of them. But when I step through the corresponding code in the debugger, it works fine, i.e. it downloads all the images properly.
Is this due to some cache, or because the image file is in GIF format?
This is my code:
for (int i = 0; i < obj.Count; i++)
{
    PartNAme = (obj[i].ToString().Split('='))[1];
    _prtnm = PartNAme.ToString().Split(';')[0];
    // this is the URL, the source for the image
    _final_URI = _URI + _Prod_name + '/' + _prtnm + ".GIF";
    WebClient client = new WebClient();
    string strtempname = DateTime.Now.ToString().Replace("/", "").Replace(":", "").Replace(" ", "");
    rnd = new Random(100);
    string _strfile = "PNImage" + strtempname + rnd.Next().ToString() + ".gif";
    string _path = "../Images/PNImage/" + _strfile;
    string _PPath = Server.MapPath(_path);
    // download the image from the source URL to the path where it is saved
    client.DownloadFile(_final_URI, _PPath);
}
In the above code, the image files download properly when I debug, but when running without the debugger some image files get repeated, so instead of the original image file an old/duplicate image file is downloaded instead.

I think Random is giving you the same number each time: new Random(100) is re-created inside the loop with a fixed seed, so rnd.Next() returns the same value on every iteration, and DateTime.Now.ToString() only changes once per second, so fast consecutive iterations produce identical file names and overwrite each other. See this thread for a fix: Random number generator only generating one random number
When you debug you are much "slower" than without debugging, which is why each iteration lands in a different second and the names stay unique. Another option would be to use GUIDs as file names.
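A minimal sketch of the GUID option (reusing the variable names from the question, which are assumed to be declared as in the original):
for (int i = 0; i < obj.Count; i++)
{
    PartNAme = (obj[i].ToString().Split('='))[1];
    _prtnm = PartNAme.ToString().Split(';')[0];
    _final_URI = _URI + _Prod_name + '/' + _prtnm + ".GIF";
    // Guid.NewGuid() is unique per call, so two fast iterations can no
    // longer collide on the same file name the way Random(100) did
    string _strfile = "PNImage" + Guid.NewGuid().ToString("N") + ".gif";
    string _PPath = Server.MapPath("../Images/PNImage/" + _strfile);
    using (WebClient client = new WebClient())
    {
        client.DownloadFile(_final_URI, _PPath);
    }
}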

Related

IOException thrown from Image class or byte array

I'm extracting a ZIP file. The ZIP contains image files and an Excel file with a product list. When articles of different sizes are listed, they refer to the same image. I copy the image file to a local folder and write the (compressed) binary data to a SQL Server database.
So when it gets to the point where a JPG file is to be processed a second time, I get this exception, although I dispose of the image object.
Worksheet ws;
string root = "C:\\images\\";
string file;
string importFolder = "C:\\import\\";
Dictionary<string, object> ins;
Image im;
Image th;
//Worksheet has been opened before
//ZIP has been extracted before to C:\import\
for (i = 2; i <= ws.Dimension.End.Row; i++) {
    ins = new Dictionary<string, object>(); //Dictionary to write data to database
    file = ws.Cells[i, 4].Text;
    System.IO.File.Copy(importFolder + "\\" + file, root + "\\" + file, true); // <-- Here the exception is thrown in the second iteration
    im = Image.FromFile(root + "\\" + file);
    im = im.GetBetterThumbnail(1024);
    byte[] im_data = im.GetJpgByteArray(85);
    ins.Add("url", "www.test.de/images/" + file);
    ins.Add("image_data", im_data);
    ins.Add("image_size", im_data.Length);
    //image will be written to database
    im.Dispose();
    im = null;
    im_data = null;
    //with these resets there shouldn't be an exception thrown
} // end for
What am I missing? After resetting the Image object and the byte array, there shouldn't be any remaining reference to the image file.
I had a look at this:
IOException: The process cannot access the file 'file path' because it is being used by another process
but I couldn't figure out how to adapt it to my case.
Yes, I could store all the file names so that each image is copied only once, but I think that's the lazy way.
Kind regards
You assign a value to the variable im twice: once with im = Image.FromFile(root + "\\" + file) and again with im = im.GetBetterThumbnail(1024). The reference to the first Image is overwritten, so it is never disposed, and it keeps its handle on the file open.
Besides, it's better to use the using statement; then you don't have to take care of the disposing yourself.
For example like this:
for (i = 2; i <= ws.Dimension.End.Row; i++)
{
    ins = new Dictionary<string, object>(); //Dictionary to write data to database
    file = ws.Cells[i, 4].Text;
    System.IO.File.Copy(importFolder + "\\" + file, root + "\\" + file, true);
    using (im = Image.FromFile(root + "\\" + file))
    {
        // I guess that this method creates its own handle
        // and therefore also needs to be disposed.
        using (var thumbnail = im.GetBetterThumbnail(1024))
        {
            byte[] im_data = thumbnail.GetJpgByteArray(85);
            ins.Add("url", "www.test.de/images/" + file);
            ins.Add("image_data", im_data);
            ins.Add("image_size", im_data.Length);
            //image will be written to database
        }
    }
} // end for
I got the issue solved by using a stream. The memory management works much better now.
New code:
//im = Image.FromFile(root + "\\" + file);
im = Image.FromStream(File.Open(root + "\\" + file, FileMode.Open));
So could it be that this is another 'Microsoft feature'?
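For context (this is documented GDI+ behavior rather than something stated in the thread): Image.FromFile keeps the file locked until the Image is disposed, and Image.FromStream requires its stream to stay open for the lifetime of the image. A pattern that releases the file handle immediately is to copy the bytes into a MemoryStream first, for example:
byte[] bytes = File.ReadAllBytes(root + "\\" + file);
using (MemoryStream ms = new MemoryStream(bytes))
using (Image img = Image.FromStream(ms))
{
    // the file on disk is not locked here; the Image is backed
    // entirely by the in-memory copy of the data
    byte[] im_data = img.GetJpgByteArray(85); // extension method from the question
}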

MIME-type .tif/.pdf files are corrupted and will not open (.NET V2.0)

Our web system (the exact same site) has been migrated to new servers. The TIF file attachments worked on the previous production servers and no code has been changed, but since the migration we specifically cannot open .tif files. PDF files spin to a blank page in the browser.
The code calls a web service (which works fine) to get a cached document from a JDE environment:
object[] file = docA.CacheDocument("/" + path, filename, doctype, xxx.Global.JDEEnvironment);
fileSize = (int)file[0];
mimeType = (string)file[1];
There is no issue returning the MIME type, which is "image/tiff". The server-level MIME type settings have been configured to accept both .tif and .tiff.
HttpContext.Current.Response.ClearHeaders();
HttpContext.Current.Response.ClearContent();
HttpContext.Current.Response.Buffer = true;
HttpContext.Current.Response.ContentType = mimeType;
string tempPath = "/" + path;
string tempFile = filename;
int i = 0;
while (i < fileSize)
{
    int[] byteRangeAry = new int[2];
    byteRangeAry[0] = i;
    if ((i + _chunkSize) < fileSize)
    {
        byteRangeAry[1] = i + _chunkSize;
    }
    else
    {
        byteRangeAry[1] = fileSize;
    }
    var docdata = docA.GetByteRange(tempPath, tempFile, byteRangeAry);
    HttpContext.Current.Response.BinaryWrite(docdata);
    HttpContext.Current.Response.Flush();
    //Move the index to the next chunk
    i = byteRangeAry[1] + 1;
}
HttpContext.Current.Response.Flush();
This snippet is untouched code that worked in production and now errors out with an object reference error:
var docdata = docA.GetByteRange(tempPath, tempFile, byteRangeAry);
However, when I add a .mime extension to tempFile, it no longer errors out and gets the byte range:
var docdata = docA.GetByteRange(tempPath, tempFile + ".mime", byteRangeAry);
The dialog box appears and downloads the file, but it opens to a blank page or to an error saying the file appears to be damaged, corrupted, or too large. I have tried opening it in several other formats to no avail. This happens with the .tif files. The PDF just leaves a blank page in the browser without a download dialog box.
This is the same code that worked in production and is a .NET V2 app. Any suggestions would be much appreciated.
This was resolved; it was a caching issue. We rewrote the CacheDocument method, which was corrupting the header. Now it is a GetDocument method, and we are able to grab documents and load them. The problem was the code; it is still strange that it worked in the previous production environment.
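The corrected code isn't shown in the thread, but for reference, a common way to harden this kind of chunked writer is to send the size and disposition headers before the first flush and keep the range arithmetic in one place. A sketch reusing the names from the question (docA, _chunkSize, GetByteRange; treating the ranges as inclusive is an assumption based on the i = byteRangeAry[1] + 1 line above):
HttpContext.Current.Response.ClearHeaders();
HttpContext.Current.Response.ClearContent();
HttpContext.Current.Response.ContentType = mimeType;
// tell the browser the total size and file name up front
HttpContext.Current.Response.AddHeader("Content-Length", fileSize.ToString());
HttpContext.Current.Response.AddHeader("Content-Disposition", "inline; filename=" + filename);
int i = 0;
while (i < fileSize)
{
    // inclusive byte range [i, end], mirroring the original loop
    int end = (i + _chunkSize < fileSize) ? i + _chunkSize : fileSize;
    var docdata = docA.GetByteRange("/" + path, filename, new int[] { i, end });
    HttpContext.Current.Response.BinaryWrite(docdata);
    HttpContext.Current.Response.Flush();
    i = end + 1;
}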

Append to file failure when executable not in same folder as data files

The problem is now solved. It was a mistake of mine that I hadn't seen before.
I am pretty new to coding in general and am very new to C# so I am probably missing something simple. I wrote a program to pull data from a login website and save that data to files on the local hard drive. The data is power and energy data for solar modules and each module has its own file. On my main workstation I am running Windows Vista and the program works just fine. When I run the program on the machine running Server 2003, instead of the new data being appended to the files, it just overwrites the data originally in the file.
The data I am downloading is csv format text over a span of 7 days at a time. I run the program once a day to pull the new day's data and append it to the local file. Every time I run the program, the local file is a copy of the newly downloaded data with none of the old data. Since the data on the web site is only updated once a day, I have been testing by removing the last day's data in the local file and/or the first day's data in the local file. Any time I change the file and run the program, the file contains the downloaded data and nothing else.
I just tried something new to test why it wasn't working, and I think I have found the source of the error. When I ran it on my local machine, the "filePath" variable was set to "". On the server, and now on my local machine, I have changed "filePath" to @"C:\Solar Yard Data\", and on both machines it catches the file-not-found exception and creates a new file in the same directory, which overwrites the original. Anyone have an idea as to why this happens?
The code below is the section that downloads each data set and appends any new data to the local file.
int i = 0;
string filePath = "C:/Solar Yard Data/";
string[] filenamesPower = new string[]
{
"inverter121201321745_power",
"inverter121201325108_power",
"inverter121201326383_power",
"inverter121201326218_power",
"inverter121201323111_power",
"inverter121201324916_power",
"inverter121201326328_power",
"inverter121201326031_power",
"inverter121201325003_power",
"inverter121201326714_power",
"inverter121201326351_power",
"inverter121201323205_power",
"inverter121201325349_power",
"inverter121201324856_power",
"inverter121201325047_power",
"inverter121201324954_power",
};
// download and save every module's power data
foreach (string url in modulesPower)
{
    // create web request and download data
    HttpWebRequest req_csv = (HttpWebRequest)HttpWebRequest.Create(String.Format(url, auth_token));
    req_csv.CookieContainer = cookie_container;
    HttpWebResponse res_csv = (HttpWebResponse)req_csv.GetResponse();
    // save the data to files
    using (StreamReader sr = new StreamReader(res_csv.GetResponseStream()))
    {
        string response = sr.ReadToEnd();
        string fileName = filenamesPower[i] + ".csv";
        // save the new data to file
        try
        {
            int startIndex = 0; // start index for substring to append to file
            int searchResultIndex = 0; // index returned when searching downloaded data for last entry of data on file
            string lastEntry; // will hold the last entry in the current data
            // open existing file and find last entry
            using (StreamReader sr2 = new StreamReader(fileName))
            {
                // get last line of existing data
                string fileContents = sr2.ReadToEnd();
                string nl = System.Environment.NewLine; // newline string
                int nllen = nl.Length; // length of a newline
                if (fileContents.LastIndexOf(nl) == fileContents.Length - nllen)
                {
                    lastEntry = fileContents.Substring(0, fileContents.Length - nllen).Substring(fileContents.Substring(0, fileContents.Length - nllen).LastIndexOf(nl) + nllen);
                }
                else
                {
                    lastEntry = fileContents.Substring(fileContents.LastIndexOf(nl) + 2);
                }
                // search the new data for the last existing line
                searchResultIndex = response.LastIndexOf(lastEntry);
            }
            // if the downloaded data contains the last record on file, append the new data
            if (searchResultIndex != -1)
            {
                startIndex = searchResultIndex + lastEntry.Length;
                File.AppendAllText(filePath + fileName, response.Substring(startIndex + 1));
            }
            // else append all the data
            else
            {
                Console.WriteLine("The last entry of the existing data was not found\nin the downloaded data. Appending all data.");
                File.AppendAllText(filePath + fileName, response.Substring(109)); // the 109 index removes the file header from the new data
            }
        }
        // if there is no file for this module, create the first one
        catch (FileNotFoundException e)
        {
            // write data to file
            Console.WriteLine("File does not exist, creating new data file.");
            File.WriteAllText(filePath + fileName, response);
            //Debug.WriteLine(response);
        }
    }
    Console.WriteLine("Power file " + (i + 1) + " finished.");
    //Debug.WriteLine("File " + (i + 1) + " finished.");
    i++;
}
Console.WriteLine("\nPower data finished!\n");
A couple of suggestions which I think will probably resolve the issue.
First, change your filePath string:
string filePath = @"C:\Solar Yard Data\";
Then create a string with the full path:
string fullFilePath = filePath + fileName;
Then check whether it exists, and create it if it doesn't (note that File.Create returns an open FileStream, which should be closed before the file is read):
if (!File.Exists(fullFilePath))
    File.Create(fullFilePath).Dispose();
Finally, put the full path to the file in your StreamReader:
using (StreamReader sr2 = new StreamReader(fullFilePath))
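Putting those suggestions together, the file-handling part of the loop could look like the sketch below. This is not the poster's final code; newData is a hypothetical stand-in for whatever substring of response the last-entry search in the question produces:
string fullFilePath = filePath + fileName;
if (!File.Exists(fullFilePath))
{
    // first run for this module: create the file with all downloaded data
    File.WriteAllText(fullFilePath, response);
}
else
{
    string fileContents;
    using (StreamReader sr2 = new StreamReader(fullFilePath)) // full path, so the existing file is actually found
    {
        fileContents = sr2.ReadToEnd();
    }
    // ... locate the last entry of fileContents inside response, as in the question ...
    File.AppendAllText(fullFilePath, newData);
}
Checking File.Exists up front also avoids using FileNotFoundException for control flow, which is what hid the original mistake: the StreamReader was opening fileName relative to the working directory instead of under filePath.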

How to store HTML with images in clipboard without leaking temporary files?

Is there any way to store HTML fragments containing <img ...> elements that reference temporary image files in the Windows clipboard, while making sure that no temporary image files are left behind in the TEMP directory?
Here is some sample code for illustrating the problem that I am encountering:
Pressing a button in my test application executes the following code:
string tempFileName = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString()) + ".png";
CreateImage(tempFileName);
string html = string.Format("<p>[This is <b>HTML</b> with a picture:<img src=\"file://{0}\">]</p>", tempFileName);
WriteHtmlToClipboard(html);
MessageBox.Show("Wrote HTML to clipboard");
CreateImage() renders and saves an image file on the fly, like this:
private static void CreateImage(string tempFileName)
{
    Bitmap b = new Bitmap(50, 50);
    using (Graphics g = Graphics.FromImage(b))
    {
        g.DrawEllipse(Pens.Red, new RectangleF(2, 2, b.Width - 4, b.Height - 4));
        Bitmap b2 = new Bitmap(b);
        b2.Save(tempFileName, ImageFormat.Png);
    }
}
WriteHtmlToClipboard() writes the HTML fragment to the clipboard:
private static void WriteHtmlToClipboard(string html)
{
    const string prefix = "<html><head><title>HTML clipboard</title></head><body>";
    const string suffix = "</body></html>";
    const string header = "Version:0.9\r\n" +
                          "StartHTML:AAAAAA\r\n" +
                          "EndHTML:BBBBBB\r\n" +
                          "StartFragment:CCCCCC\r\n" +
                          "EndFragment:DDDDDD\r\n";
    string result = header + prefix + html + suffix;
    result = result.Replace("AAAAAA", header.Length.ToString("D6"))
                   .Replace("BBBBBB", result.Length.ToString("D6"))
                   .Replace("CCCCCC", (header + prefix).Length.ToString("D6"))
                   .Replace("DDDDDD", (header + prefix + html).Length.ToString("D6"));
    Clipboard.SetText(result, TextDataFormat.Html);
}
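One caveat with this header arithmetic (my observation, not part of the original question): the CF_HTML format defines StartHTML/EndHTML/StartFragment/EndFragment as byte offsets, while string Length counts UTF-16 characters, so the offsets drift as soon as the HTML contains non-ASCII characters. A byte-count variant of the Replace block, assuming a using System.Text; directive:
int headerLen = Encoding.UTF8.GetByteCount(header);
int prefixLen = Encoding.UTF8.GetByteCount(prefix);
int htmlLen = Encoding.UTF8.GetByteCount(html);
int suffixLen = Encoding.UTF8.GetByteCount(suffix);
result = result.Replace("AAAAAA", headerLen.ToString("D6"))
               .Replace("BBBBBB", (headerLen + prefixLen + htmlLen + suffixLen).ToString("D6"))
               .Replace("CCCCCC", (headerLen + prefixLen).ToString("D6"))
               .Replace("DDDDDD", (headerLen + prefixLen + htmlLen).ToString("D6"));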
I now have two alternatives regarding the handling of the temporary image file:
1. Delete the image file when the application terminates. Problem: when I paste the HTML text from the clipboard into another application, the images are lost. Most users expect the clipboard's content to be preserved even after an application is closed.
2. Leave the image file in the TEMP directory even after the application terminates. Problem: who removes the image file when the clipboard's content is replaced by something else?
Of course, I could implement a helper application that runs whenever Windows boots and cleans up any temporary image file, but I would prefer a more elegant solution.

IE bug? Using an IHttpHandler to retrieve images from a database, getting random blank images

I'm using ASP.NET/C# and trying to build an image gallery. My images are stored as byte data in the database, and I'm using an .axd handler, like getDocument.axd?attachmentID=X, to set an Image object which is then added to the .aspx page on page load.
In IE most of the images are rendered to the page; however, certain images aren't rendered and I get the default red X image. Interestingly, when I view the properties of such an image, it has no file type. The files I'm retrieving are all JPGs.
I hope someone can help because this is a real head scratcher :)
I must note that this issue does not occur in Firefox/Chrome, where all images render correctly.
void IHttpHandler.ProcessRequest(HttpContext context)
{
    if (context.Request.QueryString["attid"] != null)
    {
        int attid = int.Parse(context.Request.QueryString["attid"]);
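        // 'att' is presumably loaded from the database using attid; that retrieval code is not shown in the post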
        context.Response.Clear();
        context.Response.AddHeader("Content-Length", att.AttachmentData.Length.ToString());
        context.Response.ContentType = att.MimeType.MimeHeader;
        //context.Response.CacheControl = "no-cache";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + att.FileName.Replace(" ", "_") + "." + att.MimeType.FileExtension + ";");
        context.Response.OutputStream.Write(att.AttachmentData, 0, att.AttachmentData.Length);
        context.Response.End();
        return;
    }
}
In order to call this method, I get a list of IDs from the database and pull back the corresponding images with the following:
foreach (int i in lstImages)
{
    Image tempImage = new Image();
    Panel pnl = new Panel();
    tempImage.ImageUrl = "getDocument.axd?attid=" + i;
    tempImage.Attributes.Add("onclick", "javascript:populateEditor(" + i + ");");
    tempImage.Height = 100;
    tempImage.Width = 100;
    pnl.Controls.Add(tempImage);
    divImages.Controls.Add(tempImage);
}
* EDIT *
A colleague of mine noticed that some of my images have strange header information contained in the image file. We suspect this might come from Photoshop saving the files, since all files that were not created by one specific person seem to display fine.
Having done this myself, I've never encountered this problem. Does it occur for the same image(s), or is it semi-random?
Check that the JPEGs are viewable in IE normally (i.e. as source files, not through your handler), check the HTTP traffic with Fiddler, and check that the byte stream going out looks good.
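As a quick way to act on that last check (a sketch, not from the thread): JPEG data always starts with the SOI marker 0xFF 0xD8, so the handler could verify the stored blob before writing it:
if (att.AttachmentData.Length < 2 ||
    att.AttachmentData[0] != 0xFF || att.AttachmentData[1] != 0xD8)
{
    // the stored bytes are not a JPEG stream; logging the attid here
    // would identify exactly the attachments IE shows as a red X
}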
