I'm extracting a ZIP file. The ZIP contains image files and an Excel file with a product list. When articles of different sizes are listed, they all refer to the same image. I copy each image file to a local folder and write the (compressed) binary data to a SQL Server database.
So when a JPG file is processed a second time, I get this exception, even though I dispose the Image object.
Worksheet ws;
string root = "C:\\images\\";
string file;
string importFolder = "C:\\import\\";
Dictionary<string, object> ins;
Image im;
Image th;
int i;
//Worksheet has been opened before
//ZIP has been extracted before to C:\import\
for (i = 2; i <= ws.Dimension.End.Row; i++)
{
    ins = new Dictionary<string, object>(); //Dictionary used to write data to the database
    file = ws.Cells[i, 4].Text;
    System.IO.File.Copy(importFolder + "\\" + file, root + "\\" + file, true); // <-- Here the exception is thrown in the second iteration
    im = Image.FromFile(root + "\\" + file);
    im = im.GetBetterThumbnail(1024);
    byte[] im_data = im.GetJpgByteArray(85);
    ins.Add("url", "www.test.de/images/" + file);
    ins.Add("image_data", im_data);
    ins.Add("image_size", im_data.Length);
    //image will be written to database
    im.Dispose();
    im = null;
    im_data = null;
    //After resetting these, there shouldn't be an exception
} // end for
What am I missing? After resetting the Image object and the byte array, there shouldn't be any remaining reference to the image file.
I had a look at this:
IOException: The process cannot access the file 'file path' because it is being used by another process
but I couldn't figure out how to adapt it to my case.
Yes, I could keep track of the file names so each one is copied only once, but I think that's the lazy way out.
Kind regards
You assign a value to the variable im twice:
once with im = Image.FromFile(root + "\\" + file) and once with im = im.GetBetterThumbnail(1024). Could it be that this opens two handles that both need to be closed?
Besides, it's better to use the using statement; then you don't have to take care of the disposing yourself.
For example like this:
for (i = 2; i <= ws.Dimension.End.Row; i++)
{
    ins = new Dictionary<string, object>(); //Dictionary to write data to database
    file = ws.Cells[i, 4].Text;
    System.IO.File.Copy(importFolder + "\\" + file, root + "\\" + file, true);
    using (im = Image.FromFile(root + "\\" + file))
    {
        // I guess that this method creates its own handle
        // and therefore also needs to be disposed.
        using (var thumbnail = im.GetBetterThumbnail(1024))
        {
            byte[] im_data = thumbnail.GetJpgByteArray(85);
            ins.Add("url", "www.test.de/images/" + file);
            ins.Add("image_data", im_data);
            ins.Add("image_size", im_data.Length);
            //image will be written to database
        }
    }
} // end for
I got the issue solved by using a stream. The memory management works much better now.
New code:
//im = Image.FromFile(root + "\\" + file);
im = Image.FromStream(File.Open(root + "\\" + file, FileMode.Open));
So could it be that this is another 'Microsoft feature'?
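For completeness: Image.FromStream requires the stream to stay open for the lifetime of the image, and a FileStream that is never disposed only gives its handle back when the garbage collector finalizes it. A variant of the stream approach that releases everything deterministically could look like this (just a sketch, using locals instead of the im/th variables above; GetBetterThumbnail and GetJpgByteArray are the extension methods from the original snippet):
using (var stream = File.OpenRead(root + "\\" + file))   // read-only access
using (var image = Image.FromStream(stream))             // GDI+ needs the stream open for the image's lifetime
using (var thumb = image.GetBetterThumbnail(1024))
{
    byte[] im_data = thumb.GetJpgByteArray(85);
    ins.Add("url", "www.test.de/images/" + file);
    ins.Add("image_data", im_data);
    ins.Add("image_size", im_data.Length);
    //image will be written to database
} // stream, image and thumbnail are all released here, so the next File.Copy can overwrite the file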
Related
I am writing an application that allows the user to import a picture from the computer and view it in my application. The user also has an import button under the picture, and he/she can select the picture's destination on the computer.
My problem is that I get the following exception:
System.IO.IOException: 'The process cannot access the file 'D:\Workspaces_Foo\Foo\Theme\MyPic.png' because it is being used by another process.'
When I debug, I find the line where it breaks. It is here:
string image = Constants.ImageName;
if (image != "NONE")
{
    using (var stream = File.OpenRead(SourceDir + ImageName))
    {
        Directory.CreateDirectory(Path.GetDirectoryName(DestinationDir + "\\" + image));
        File.Copy(SourceDir + "\\" + image, DestinationDir + "\\" + image, true); //breaks here...
    }
}
I assumed that using a FileStream would allow me to continue with my picture transfer. Can anyone tell me why I am getting this error?
You are opening a stream for the file. This opens the file and marks it as in use.
Then you use File.Copy to copy it, but the operating system says the file is in use (it isn't clever enough to notice that it is in use by your own process!).
Forget the stream and try this instead:
string image = Constants.ImageName;
if (image != "NONE")
{
    Directory.CreateDirectory(Path.GetDirectoryName(DestinationDir + "\\" + image));
    File.Copy(SourceDir + "\\" + image, DestinationDir + "\\" + image, true);
}
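If you really do need a stream (for example, to read the image at the same time), another option is to copy through the stream itself instead of calling File.Copy on a file you are holding open. A rough sketch with the same SourceDir, DestinationDir and image names:
Directory.CreateDirectory(Path.GetDirectoryName(DestinationDir + "\\" + image));
using (var source = File.OpenRead(SourceDir + "\\" + image))
using (var destination = File.Create(DestinationDir + "\\" + image))
{
    source.CopyTo(destination); // streams the bytes across, no second handle on the source is needed
}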
I want to rewrite my txt file (I need to delete lines). I use this code:
string cesta3 = cesta + "database.txt";
string file = new StreamReader(cesta3).ReadToEnd();
var linesToKeep = File.ReadLines(cesta3)
    .Where(l => l != "Meno: " + textBox1.Text + " PN " + textBox2.Text);
But I don't know how to save my file with the same name. I tried this:
File.WriteAllLines(cesta3, linesToKeep); // <== exception
var tempFile = Path.GetTempFileName();
File.Move(tempFile + ".txt", cesta3);
But it throws an exception:
Additional information: Access to the path 'C:\Users\Lenovo\documents\visual studio 2015\Projects\attendancer\attendancer\bin\Debug\database.txt' is denied.
How can I do this?
This line of code locks the file and prevents it from being overwritten or moved: string file = new StreamReader(cesta3).ReadToEnd();
Fix:
You can properly close the StreamReader after reading the file text from it by wrapping it in a using block.
Alternatively, since nothing in the sample uses that StreamReader or the result of ReadToEnd (the file variable), you can simply remove that line.
Side note: File.Move will throw more exceptions after you are done with the first one, as the source file is unlikely to exist; I'm not sure what you were trying to do with that call.
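Putting it together, a minimal sketch of the whole fix might look like this (keeping cesta, textBox1 and textBox2 from the question; the ToList() matters because File.ReadLines is lazy, so the lines have to be fully read before File.WriteAllLines reopens the same file for writing):
string cesta3 = cesta + "database.txt";
var linesToKeep = File.ReadLines(cesta3)
    .Where(l => l != "Meno: " + textBox1.Text + " PN " + textBox2.Text)
    .ToList(); // read everything into memory first
File.WriteAllLines(cesta3, linesToKeep); // now the file can be reopened and overwritten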
I am running a web server with a Mongo database. The database stores records, which include pictures stored as base64-encoded strings.
I am writing an API call that fetches several of these images from the records, builds them into .jpg images, and adds them to a zip file stored on the server.
The problem I'm running into is that the call returns with an OutOfMemory exception after only a few records, even though the total size of the images is less than 10 MB, with each image being around 500 KB. Here is the code I am using:
using (ZipFile zipFile = new ZipFile())
{
    var i = 0;
    foreach (ResidentialData resident in foundResidents)
    {
        MemoryStream tempstream = new MemoryStream();
        Image userImage1 = LoadImage(resident.AccountImage);
        Bitmap tmp = new Bitmap(userImage1);
        tmp.Save(tempstream, ImageFormat.Jpeg);
        tempstream.Seek(0, SeekOrigin.Begin);
        byte[] imageData = new byte[tempstream.Length];
        tempstream.Read(imageData, 0, imageData.Length);
        zipFile.AddEntry(i + " | " + resident.Initials + " " + resident.Surname + ".jpg", imageData);
        i++;
        tempstream.Dispose();
    }
    zipFile.Save(@"C:\temp\test.zip");
}
Any ideas as to what might be eating up all the memory? I don't see how this is possible, as the machine it is running on has 32 GB of RAM.
You need to dispose your bitmaps.
Change this:
tempstream.Dispose();
...to:
tempstream.Dispose();
tmp.Dispose();
You might want to look into using using() blocks, as they let you define, allocate, and automatically release a resource.
e.g.
using (var x = new SomethingThatNeedsDisposing())
{
// do something with x
} // <----- at this point .NET will call x.Dispose() for you
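Each Bitmap and Image wraps unmanaged GDI+ memory that the garbage collector can't see, which is why you can hit an OutOfMemory exception long before 32 GB of RAM is anywhere near full. Applied to the loop above, the body might look roughly like this (a sketch only, keeping LoadImage, zipFile and the other names from the question; ToArray() replaces the Seek/Read pair):
foreach (ResidentialData resident in foundResidents)
{
    using (Image userImage1 = LoadImage(resident.AccountImage))
    using (Bitmap tmp = new Bitmap(userImage1))
    using (MemoryStream tempstream = new MemoryStream())
    {
        tmp.Save(tempstream, ImageFormat.Jpeg);
        byte[] imageData = tempstream.ToArray(); // copies the buffer, so no Seek/Read is needed
        zipFile.AddEntry(i + " | " + resident.Initials + " " + resident.Surname + ".jpg", imageData);
        i++;
    } // image, bitmap and stream are all released before the next record
}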
Hi, I have the following function:
private void CreateRoomImage(string path)
{
    Directory.CreateDirectory(path);
    var file = "";
    foreach (PanelView panelView in pv)
    {
        var RoomImage = GetRaumImageName(panelView.Title);
        file = path + GetImageFile(RoomImage);
        if (File.Exists(file))
        {
            File.Delete(file);
        }
        using (var img = GetRaumImage(panelView.Title, panelView))
        {
            ImageWriter imgWriter = new ImageWriter(ImageFormat.Bmp);
            imgWriter.Save(img, file);
        }
    }
}
My problem is that every time I try to delete the existing file, my program throws an exception:
The process cannot access the file because it is being used by another process
Is there a solution for this problem? How can I delete the existing image?
It seems like this could be a weird race condition where File.Exists hasn't released the resource yet and the delete runs before it is released. I would probably try something along these lines:
try
{
    File.Delete(file);
}
catch
{
    //File does not exist or another error occurred.
}
I found it on my own.
In PageLoad I use that file as well. I forgot to dispose the image:
foreach (PanelView panel in pv)
{
    path = Request.PhysicalPath.Substring(0, Request.PhysicalPath.LastIndexOf('\\') + 1) + subPath + "\\" + GetRaumImageName(panel.Title);
    bitMap = new Bitmap(path + ".bmp");
    b0 = BmpToMonochromConverter.CopyToBpp(bitMap, 1);
    // bounce.updateInterface.UpdateProductImage(b0, panel.Panel.PRODUCT_ID, "", ref update_Handle);
    bitMap.Dispose();
}
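The same fix could also be written with a using block, so the bitmap (and its handle on the .bmp file) is released even if CopyToBpp throws. A sketch using a local variable in place of the bitMap field:
foreach (PanelView panel in pv)
{
    path = Request.PhysicalPath.Substring(0, Request.PhysicalPath.LastIndexOf('\\') + 1) + subPath + "\\" + GetRaumImageName(panel.Title);
    using (var bmp = new Bitmap(path + ".bmp"))
    {
        b0 = BmpToMonochromConverter.CopyToBpp(bmp, 1);
    } // the bitmap is disposed here, so CreateRoomImage can delete or overwrite the file later
}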
Anyway, thanks for the help!
Why delete the file in the first place? Can't you just overwrite the existing file if it already exists?
Another thing you can do is pause the thread for around 400 ms until the OS does its thing and the files are completely deleted and all streams closed.
The problem is now solved. It was a mistake of mine that I hadn't seen before.
I am pretty new to coding in general and very new to C#, so I am probably missing something simple. I wrote a program to pull data from a website that requires a login and save that data to files on the local hard drive. The data is power and energy data for solar modules, and each module has its own file. On my main workstation, which runs Windows Vista, the program works just fine. When I run the program on the machine running Server 2003, instead of the new data being appended to the files, it just overwrites the data that was originally in the file.
The data I am downloading is CSV-formatted text covering 7 days at a time. I run the program once a day to pull the new day's data and append it to the local file. Every time I run the program, however, the local file ends up as a copy of the newly downloaded data with none of the old data. Since the data on the website is only updated once a day, I have been testing by removing the last day's data and/or the first day's data from the local file. Any time I change the file and run the program, the file contains the downloaded data and nothing else.
I just tried something new to test why it wasn't working and think I have found the source of the error. When I ran the program on my local machine, the filePath variable was set to "". On the server (and now on my local machine) I have changed filePath to @"C:\Solar Yard Data\", and on both machines the program catches the FileNotFoundException and creates a new file in that directory, which overwrites the original. Does anyone have an idea as to why this happens?
The code below is the section that downloads each data set and appends any new data to the local file.
int i = 0;
string filePath = "C:/Solar Yard Data/";
string[] filenamesPower = new string[]
{
    "inverter121201321745_power",
    "inverter121201325108_power",
    "inverter121201326383_power",
    "inverter121201326218_power",
    "inverter121201323111_power",
    "inverter121201324916_power",
    "inverter121201326328_power",
    "inverter121201326031_power",
    "inverter121201325003_power",
    "inverter121201326714_power",
    "inverter121201326351_power",
    "inverter121201323205_power",
    "inverter121201325349_power",
    "inverter121201324856_power",
    "inverter121201325047_power",
    "inverter121201324954_power",
};
// download and save every module's power data
foreach (string url in modulesPower)
{
    // create web request and download data
    HttpWebRequest req_csv = (HttpWebRequest)HttpWebRequest.Create(String.Format(url, auth_token));
    req_csv.CookieContainer = cookie_container;
    HttpWebResponse res_csv = (HttpWebResponse)req_csv.GetResponse();
    // save the data to files
    using (StreamReader sr = new StreamReader(res_csv.GetResponseStream()))
    {
        string response = sr.ReadToEnd();
        string fileName = filenamesPower[i] + ".csv";
        // save the new data to file
        try
        {
            int startIndex = 0; // start index for substring to append to file
            int searchResultIndex = 0; // index returned when searching downloaded data for last entry of data on file
            string lastEntry; // will hold the last entry in the current data
            //open existing file and find last entry
            using (StreamReader sr2 = new StreamReader(fileName))
            {
                //get last line of existing data
                string fileContents = sr2.ReadToEnd();
                string nl = System.Environment.NewLine; // newline string
                int nllen = nl.Length; // length of a newline
                if (fileContents.LastIndexOf(nl) == fileContents.Length - nllen)
                {
                    lastEntry = fileContents.Substring(0, fileContents.Length - nllen).Substring(fileContents.Substring(0, fileContents.Length - nllen).LastIndexOf(nl) + nllen);
                }
                else
                {
                    lastEntry = fileContents.Substring(fileContents.LastIndexOf(nl) + 2);
                }
                // search the new data for the last existing line
                searchResultIndex = response.LastIndexOf(lastEntry);
            }
            // if the downloaded data contains the last record on file, append the new data
            if (searchResultIndex != -1)
            {
                startIndex = searchResultIndex + lastEntry.Length;
                File.AppendAllText(filePath + fileName, response.Substring(startIndex + 1));
            }
            // else append all the data
            else
            {
                Console.WriteLine("The last entry of the existing data was not found\nin the downloaded data. Appending all data.");
                File.AppendAllText(filePath + fileName, response.Substring(109)); // the 109 index removes the file header from the new data
            }
        }
        // if there is no file for this module, create the first one
        catch (FileNotFoundException e)
        {
            // write data to file
            Console.WriteLine("File does not exist, creating new data file.");
            File.WriteAllText(filePath + fileName, response);
            //Debug.WriteLine(response);
        }
    }
    Console.WriteLine("Power file " + (i + 1) + " finished.");
    //Debug.WriteLine("File " + (i + 1) + " finished.");
    i++;
}
Console.WriteLine("\nPower data finished!\n");
A couple of suggestions which I think will probably resolve the issue.
First, change your filePath string:
string filePath = @"C:\Solar Yard Data\";
Then create a string with the full path:
String fullFilePath = filePath + fileName;
Then check whether it exists and create it if it doesn't (note that File.Create returns an open FileStream, so dispose it straight away or it will keep the file locked):
if (!File.Exists(fullFilePath))
    File.Create(fullFilePath).Dispose();
Finally, put the full path to the file in your StreamReader:
using (StreamReader sr2 = new StreamReader(fullFilePath))
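Putting those suggestions together, the relevant part of the loop might look something like this (a sketch only, keeping the variable names used in the question and answer):
string fileName = filenamesPower[i] + ".csv";
string fullFilePath = Path.Combine(filePath, fileName); // filePath = @"C:\Solar Yard Data\"

if (!File.Exists(fullFilePath))
    File.Create(fullFilePath).Dispose(); // release the handle File.Create leaves open

using (StreamReader sr2 = new StreamReader(fullFilePath))
{
    string fileContents = sr2.ReadToEnd();
    // ... find the last entry and call File.AppendAllText(fullFilePath, ...) as before
}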