I have a scenario where I download files from storage to the temp folder. Then I call a framework to process the file, and this framework needs the file for the lifetime of the application. When the application exits I close all files, but when the application crashes the file does not get deleted. There can be multiple instances of the application.
What is the best way to get these files deleted? I have 2 ideas:
It is okay to delete the files on the next run of the application. My idea is to use one main folder in the temp path, with one folder inside it whose name equals the process ID of the current process. On the next run of the application I check all folders and whether a process with that ID is still running; if not, I delete the folder. The problem with this solution is that it needs admin permissions to run Process.GetProcessById.
I create one folder per process and use a lock file. I keep a stream open with DeleteOnClose set to true. On the next run of the application I check all folders and their lock files. If there is no lock file, or I can delete it, I also delete the folder.
Do you have any other ideas?
EDIT: Implemented solution #2, works like a charm.
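A minimal sketch of solution #2, assuming a per-app root under %TEMP% (the folder layout and lock-file name are illustrative, not the exact code used):

using System;
using System.Diagnostics;
using System.IO;

static class TempFolders
{
    static readonly string Root = Path.Combine(Path.GetTempPath(), "MyApp");

    // Create this process's folder and hold an exclusive DeleteOnClose lock
    // file in it. Keep the returned stream open for the process lifetime;
    // the OS removes the lock file when the handle closes, even on a crash.
    public static FileStream AcquireProcessFolder(out string folder)
    {
        folder = Path.Combine(Root, Process.GetCurrentProcess().Id.ToString());
        Directory.CreateDirectory(folder);
        return new FileStream(Path.Combine(folder, ".lock"),
            FileMode.CreateNew, FileAccess.ReadWrite, FileShare.None,
            4096, FileOptions.DeleteOnClose);
    }

    // On startup: delete folders whose lock file is gone or deletable.
    public static void CleanupStale()
    {
        if (!Directory.Exists(Root)) return;
        foreach (var dir in Directory.GetDirectories(Root))
        {
            try
            {
                var lockFile = Path.Combine(dir, ".lock");
                if (File.Exists(lockFile)) File.Delete(lockFile); // throws if another instance still holds it
                Directory.Delete(dir, true);
            }
            catch (IOException) { /* folder belongs to a live instance; skip it */ }
        }
    }
}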
There is no built-in way to delete temp files automatically, but you can achieve this on reboot with a simple call to the WinAPI function MoveFileEx, specifying the flag value MOVEFILE_DELAY_UNTIL_REBOOT: your temp file will be gone the next time you boot (if it still exists). Here are two examples of doing this from C#: 1 and 2.
Calling this function puts an entry into the HKLM\System\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations key in the registry (you could write that value directly, but the function is the preferred way to do it). Do this before working with the temp file, then delete the temp file when you're finished with it. If your process crashes, every file that has been worked with will already have an entry in the registry; if a file is already gone by the next reboot, nothing happens (no error is raised).
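A minimal P/Invoke sketch of this call (note that, per the MoveFileEx documentation, MOVEFILE_DELAY_UNTIL_REBOOT requires administrator rights, since it writes to that HKLM key):

using System.ComponentModel;
using System.Runtime.InteropServices;

static class PendingDelete
{
    const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool MoveFileEx(string lpExistingFileName, string lpNewFileName, int dwFlags);

    // Passing null as the new name means "delete at the next reboot".
    public static void DeleteOnReboot(string path)
    {
        if (!MoveFileEx(path, null, MOVEFILE_DELAY_UNTIL_REBOOT))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}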
Related
I've scoured for information, but I fear I may be getting in over my head here, as I am not proficient in multi-threading. I have a desktop app that needs to create a read-only temp copy of an existing file, open the file in its default application, and then delete the file once the user is done viewing it.
It must open read-only, as the user may try to save it thinking it's the original file.
To do this I have created a new thread which copies the file to a temp path, sets the file's attributes, attaches a Process handler to it, and then waits and deletes the file on exit. The advantage of this is that the thread continues to run even after the program has exited (so it seems, anyway). This way the file still gets deleted even if the user keeps it open longer than the program.
Here is my code. The att object holds my file information.
new Thread(() =>
{
    //Create the temp file name
    string temp = Path.Combine(System.IO.Path.GetTempPath(), att.FileNameWithExtension);
    //Determine if this file already exists (in case it didn't delete last time).
    //This is important: a leftover read-only attribute would cause
    //User Access Control (UAC) issues when overwriting.
    if (File.Exists(temp)) { File.SetAttributes(temp, FileAttributes.Temporary); }
    //Copy the original file to the temp location, overwriting if it already
    //exists due to a previous deletion failure
    File.Copy(att.FullFileName, temp, true);
    //Set temp file attributes
    File.SetAttributes(temp, FileAttributes.Temporary | FileAttributes.ReadOnly);
    //Start the process and monitor it
    var p = Process.Start(temp); //Open attachment in its default program
    if (p != null) { p.WaitForExit(); }
    //After the process ends, remove the read-only attribute to allow deletion without UAC issues
    File.SetAttributes(temp, FileAttributes.Temporary);
    File.Delete(temp);
}
).Start();
I've tested it and so far it seems to be doing the job, but it all feels so messy. I honestly feel like there should be an easier way to handle this that doesn't involve creating new threads. I've looked into copying files into memory first, but I can't figure out how to open them in their default application from a MemoryStream.
So my questions are:
Is there a better way to achieve opening a read-only, temp copy of a file that doesn't write to disk first?
If not, what implications could I face from taking the multithreaded approach?
Any info is appreciated.
Instead of removing the temporary file(s) on shutdown, remove the 'left over' files at startup.
This is often easier to implement than trying to ensure that such cleanup code runs at process termination, and it handles the 'forced' cases like power failure, 'kill -9', 'End process', etc.
I like to create a 'temp' folder for such files: all of my apps scan and delete any files in such a folder at startup, and the code can be added to any new project without change.
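A minimal sketch of such a startup sweep (the folder name is illustrative):

using System.IO;

static void CleanTempFolder()
{
    // Dedicated per-app temp folder; "MyAppTemp" is just an example name.
    string folder = Path.Combine(Path.GetTempPath(), "MyAppTemp");
    Directory.CreateDirectory(folder); // no-op if it already exists
    foreach (var file in Directory.GetFiles(folder))
    {
        try { File.Delete(file); }
        catch (IOException) { /* still in use by another instance; skip */ }
    }
}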
At a particular location, a directory with files inside it gets created. Another piece of software continuously checks that location for new directories and processes each one immediately.
Now, the problem is:
When I want to copy files to that location, I first create the directory and then copy all the files into it. But as soon as the directory is created, the processing software processes and deletes it, and my copy function raises an exception.
if (!Directory.Exists(DestinationPath))
    Directory.CreateDirectory(DestinationPath);
CopyFilesInDirectory(); //Throws an exception: the directory created above was already processed and deleted by the other software.
What I want to do is stop the parent directory from being processed by the other software until my copying of the files is complete.
If you have a small window of opportunity, you can block the other software from deleting the directory by keeping a file open in it, like this:
if (!Directory.Exists(DestinationPath))
    Directory.CreateDirectory(DestinationPath);
using (var lockFile = File.Open(Path.Combine(DestinationPath, "_lock"), FileMode.Create, FileAccess.ReadWrite, FileShare.None))
{
    CopyFilesInDirectory();
}
If the other software may delete the directory between the CreateDirectory call and the File.Open call, you will need to try again until you get the lock file open.
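A hedged sketch of such a retry loop (the lock-file name and the absence of a retry limit are assumptions to adjust for your case):

FileStream lockFile = null;
while (lockFile == null)
{
    Directory.CreateDirectory(DestinationPath); // no-op if it already exists
    try
    {
        // Holding an open, non-shared handle inside the directory blocks its deletion.
        lockFile = File.Open(Path.Combine(DestinationPath, "_lock"),
            FileMode.Create, FileAccess.ReadWrite, FileShare.None);
    }
    catch (IOException)
    {
        // The other software deleted the directory in between; try again.
    }
}
using (lockFile)
{
    CopyFilesInDirectory();
}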
The cleaner way would be to modify the other software, of course.
There are several threads on SO that describe how to check which application creates a file, using tools like Sysinternals Process Monitor. Is something like this possible programmatically from .NET?
Background: My program has to remote-control a proprietary third-party application using its automation interface, and one of the functions I need from this application has a bug where it creates a bunch of temporary files in %TEMP% called tmpXXXX.tmp (the same as .NET's Path.GetTempFileName() does) but never deletes them. This causes the C drive to fill up over time, eventually failing the application. I have already filed a bug with the manufacturer, but we need a temporary workaround in the meantime, so I thought of putting a FileSystemWatcher on %TEMP% that watches tmp*.tmp, collects these files, and deletes them after the operation on the third-party application finishes. But this is risky, as another application might also write files with the same name pattern to %TEMP%, and I only want to delete those created by NastyBuggyThirdPartyApplication.exe.
Is this possible at all?
This kind of thing is possible, but it may be a bit tricky.
To know who created the file, look at the user that owns it. You might need to create a specific user and run the buggy application under it. To do that, write a small launcher that starts the buggy app while impersonating that user, so anything done within the app, including file creation, happens as this user.
I don't know how to monitor and get triggered when a file is created, but nothing prevents you from setting a timer that wakes up every five or ten minutes and checks whether any file in the directory is owned by the application user and no longer open, deleting it if so.
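A sketch of the ownership check, assuming .NET Framework (where File.GetAccessControl is available directly; the account name is a placeholder for your dedicated user):

using System;
using System.IO;
using System.Security.Principal;

// Returns true if the file is owned by the given account,
// e.g. @"MYDOMAIN\BuggyAppUser" (a hypothetical dedicated user).
static bool IsOwnedBy(string path, string accountName)
{
    var owner = File.GetAccessControl(path).GetOwner(typeof(NTAccount));
    return string.Equals(owner.Value, accountName, StringComparison.OrdinalIgnoreCase);
}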
If the manufacturer reacts quickly with a bug fix, you won't need your workaround for very long. Another solution, if possible, might be to move the Temp folder to another drive that has lots of space...
One solution is to use a FileSystemWatcher to automatically delete the files, but before deleting you should check that the file is not currently locked or in use by another process. The Sysinternals Suite, for example, has a tool called handle.exe that can do this. Use it from the command line:
handle.exe -a
You can invoke this from a C# program (though there might be some performance issues).
So what you would do is: when a file is created, verify whether it is in use or locked (for example, you can use the code provided in Is there a way to check if a file is in use?) and then delete it.
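A sketch of that check, in the spirit of the linked question (try to open the file exclusively and treat failure as "in use"):

using System.IO;

static bool IsFileLocked(string path)
{
    try
    {
        // If we can open the file exclusively, nothing else is using it.
        using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None)) { }
        return false;
    }
    catch (IOException)
    {
        return true; // locked or otherwise in use
    }
}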
Most of the time, when an app is using a temp file it will lock it, preventing just what you fear: deleting files that belong to other processes.
As far as I can tell there is no sure way to identify which process created a specific file.
I'm getting a very intermittent "directory not empty" error when trying to delete a directory from C# code, but when I look, the directory seems to be empty.
The actual scenario is this: process A invokes process B using a synchronous .NET Remoting call; process B deletes the files from the directory and then returns to process A, which deletes the directory itself. The disk is a locally attached NTFS disk (probably SATA).
I'm wondering if there's a possible race condition in NTFS when two processes cooperate this way, where process B's delete calls have not been completely flushed to the file system?
Of course, the more obvious answer is that the directory genuinely was not empty at the time and something emptied it before I looked, but I don't see how this could happen in my current application, because there is no other process that would delete the files.
Sure, deleting directories is a perilous adventure on a multi-tasking operating system. You always run the risk that another process has a file opened. Failure to delete the directory in your scenario has two primary causes:
Particularly troublesome are the kinds of processes that open the file in a way that does not prevent you from deleting the file but still makes deleting the directory fail with this error. Search indexers and anti-malware fit this category. They open the file with delete sharing, FileShare.Delete in a .NET program. Deleting the file works fine, but the file won't actually disappear until they close their handle. So you can't delete the directory until they do.
Very hard to diagnose is a process that has the directory selected as its current working directory, Environment.CurrentDirectory in a .NET program. Explorer tends to trigger this; just looking at the directory is enough to prevent it from being deleted.
These mishaps occur entirely outside of your control, so you will need to deal with them; catching the exception is required. Trying again later helps little, since there is no upper limit on how long you'll have to wait. Renaming the directory and giving it a "trash" name is a good strategy. Note how the recycle bin in Windows essentially follows this scenario; it is good for more than just recycling :)
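A hedged sketch of that rename-to-trash strategy (the trash naming scheme and the deferred sweep are assumptions):

using System;
using System.IO;

static void DeleteDirectoryViaTrash(string dir)
{
    // Get the directory out of the way immediately under a throwaway name...
    string trash = Path.Combine(Path.GetDirectoryName(dir),
        "~trash-" + Guid.NewGuid().ToString("N"));
    Directory.Move(dir, trash);
    try
    {
        // ...then attempt the real delete.
        Directory.Delete(trash, true);
    }
    catch (IOException)
    {
        // An indexer or scanner still holds a handle; leave the trash
        // folder behind and sweep up ~trash-* folders on a later run.
    }
}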
I have a service running on a webserver that waits for a zip to be dropped in a folder, extracts it, and then moves it to a certain directory. Since we want to replace the directory in question, the service renames the existing folder (a very large folder that takes a couple of minutes to delete), moves the extracted files into its place, and then deletes the old folder. The problem is: when it tries to rename the existing folder, it gets 'Access to the path '<>' is denied.', I believe because the folder is in constant use by the webservice. Is there a way I can force the folder to rename, or take control of it and wait for it to not be in use? Or is there another way I can accomplish this goal?
You can't "force" a rename while any process holds an underlying operating-system handle to the folder (it would be horrible if you were able to do that).
You can:
Implement pause/resume functionality for the webservice so it can be told to pause its work and release the handles, then resume after you are done.
or
Stop the webservice completely, do your work, then start the webservice again.
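If the webservice happens to be hosted as a Windows service, the second option could look roughly like this sketch ("MyWebService" is a hypothetical service name; an IIS-hosted service would instead be stopped via its application pool):

using System;
using System.ServiceProcess; // reference System.ServiceProcess.dll

using (var sc = new ServiceController("MyWebService"))
{
    sc.Stop();
    sc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(1));

    // Rename/replace/delete the folder here while no handles are held.

    sc.Start();
    sc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromMinutes(1));
}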