I have a service running on a webserver that waits for a zip to be dropped in a folder, extracts it, and then moves the contents to a certain directory. Since we want to replace the directory in question, the service renames the existing folder (a very large folder that takes a couple of minutes to delete), moves the extracted files into its place, and then deletes the old folder. The problem is that when it tries to rename the existing folder, it gets 'Access to the path '<>' is denied.', I believe because the folder is in constant use by the webservice. Is there a way I can force the folder to rename, or take control and wait for it to not be in use? Or is there another way I can accomplish this goal?
You can't "force" a rename while any process holds an underlying operating system handle to the folder (it would be horrible if you could).
You can:
Implement pause/resume functionality for the webservice so it can be told to pause its work and release the handles, then resume after you are done.
or
Stop the webservice completely, do your work, then start the webservice again (see the sketch below)
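If the webservice is hosted as a Windows service, option 2 can be scripted. Here is a minimal sketch; the service name "MyWebService" and the paths are made-up placeholders, and if the site is hosted in IIS you would stop the application pool instead:

    // Stop the service, swap the folders, restart, then delete the old copy.
    using System;
    using System.IO;
    using System.ServiceProcess; // add a reference to System.ServiceProcess.dll

    class FolderSwap
    {
        static void Main()
        {
            using (var svc = new ServiceController("MyWebService"))
            {
                svc.Stop();
                svc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(2));
                try
                {
                    // With the service stopped, no handles are held on the folder.
                    Directory.Move(@"C:\site\content", @"C:\site\content_old");
                    Directory.Move(@"C:\site\content_new", @"C:\site\content");
                }
                finally
                {
                    svc.Start(); // bring the service back up even if the swap failed
                }
                Directory.Delete(@"C:\site\content_old", recursive: true);
            }
        }
    }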
There are several threads on SO that describe how to check which application creates a file with tools like Sysinternals Process Monitor. Is something like this possible programmatically from .NET?
Background: My program has to remote-control a proprietary third-party application using its automation interface, and one of the functions I need from this application has a bug where it creates a bunch of temporary files in %TEMP% named tmpXXXX.tmp (the same pattern .NET's Path.GetTempFileName() uses) but does not delete them. This causes the C drive to fill up over time, eventually failing the application. I already filed a bug with the manufacturer, but we need a temporary workaround in the meantime, so I thought of putting a FileSystemWatcher on %TEMP% that watches tmp*.tmp, collects these files, and deletes them after the operation on the third-party application finishes. But this is risky, as another application might also write files with the same file name pattern to %TEMP%, so I only want to delete those created by NastyBuggyThirdPartyApplication.exe.
Is this possible at all?
This kind of thing is possible, but it may be a bit tricky.
To know who created a file, look at the user that owns it. That means you might need to create a dedicated user account and run the buggy application under it: write a small launcher that starts the app while impersonating that user, so everything the app does, including creating files, happens under that account.
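A minimal sketch of that idea, assuming a hypothetical local account "TempFileUser" and a made-up install path (the password handling below is a placeholder; load it from a proper secret store):

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Security;
    using System.Security.AccessControl;
    using System.Security.Principal;

    static class DedicatedUserLauncher
    {
        public static void Start()
        {
            var password = new SecureString();
            foreach (char c in "placeholder")    // placeholder only; use a secret store
                password.AppendChar(c);

            var psi = new ProcessStartInfo(@"C:\ThirdParty\NastyBuggyThirdPartyApplication.exe")
            {
                UserName = "TempFileUser",        // hypothetical dedicated account
                Password = password,
                Domain = Environment.MachineName,
                UseShellExecute = false           // required when passing credentials
            };
            Process.Start(psi);
        }

        // A tmp file belongs to the buggy app if the dedicated user owns it.
        public static bool IsOwnedByDedicatedUser(string path)
        {
            IdentityReference owner = new FileInfo(path)
                .GetAccessControl()
                .GetOwner(typeof(NTAccount));
            return owner != null &&
                   owner.Value.EndsWith(@"\TempFileUser", StringComparison.OrdinalIgnoreCase);
        }
    }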
I don't know how to monitor and get triggered when a file is created, but nothing prevents you from setting a timer that wakes up every five or ten minutes, checks whether any file in the directory is owned by the application's user and no longer open, and deletes it if so.
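For instance, a sketch of such a sweep, reusing the hypothetical IsOwnedByDedicatedUser check from the snippet above:

    using System;
    using System.IO;
    using System.Threading;

    static class TempSweeper
    {
        public static Timer Start()
        {
            // Wake up every five minutes and try to clean the temp directory.
            return new Timer(_ => Sweep(), null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
        }

        static void Sweep()
        {
            foreach (string file in Directory.GetFiles(Path.GetTempPath(), "tmp*.tmp"))
            {
                try
                {
                    if (!DedicatedUserLauncher.IsOwnedByDedicatedUser(file))
                        continue;                // someone else's file; leave it alone

                    // Opening with exclusive access throws if the file is still open.
                    using (File.Open(file, FileMode.Open, FileAccess.Read, FileShare.None)) { }
                    File.Delete(file);
                }
                catch (IOException)
                {
                    // Locked or already gone; try again on the next tick.
                }
            }
        }
    }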
If the vendor reacts quickly to the bug report, you won't need your workaround for very long. Another solution, if possible, might be simply to move the Temp folder to another drive that has lots of space...
One solution is to use a FileSystemWatcher to delete the files automatically, but before deleting you should check that the file is not currently locked or in use by another process. For example, the Sysinternals Suite has a tool called handle.exe that can do this. Use it from the command line:
handle.exe -a
You can invoke this from a C# program (there might be some performance issues, though).
So what you would do is: when a file is created, verify whether it is in use or locked (for example, you can use the code provided in "Is there a way to check if a file is in use?") and then delete it.
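If you go the handle.exe route, invoking it from C# is straightforward; this sketch assumes handle.exe is on the PATH and its EULA has already been accepted:

    using System.Diagnostics;

    static class HandleQuery
    {
        // Returns handle.exe's report of which processes hold the given file.
        public static string QueryHandles(string path)
        {
            var psi = new ProcessStartInfo("handle.exe", "-a \"" + path + "\"")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false,
                CreateNoWindow = true
            };
            using (var p = Process.Start(psi))
            {
                string output = p.StandardOutput.ReadToEnd();
                p.WaitForExit();
                return output;   // parse this to decide whether the file is free
            }
        }
    }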
Most of the time, when an app is using a temp file it will lock it, preventing exactly what you fear: deleting files that belong to other processes.
As far as I can tell there is no sure way to identify which process created a specific file.
I have a scenario where I download files from storage to the temp folder. Then I call a framework to process the file, and this framework needs the file for the lifetime of the application. When the application exits I close all files, but when the application crashes the file does not get deleted. There can be multiple instances of the application.
What is the best way to get these files deleted? I have 2 ideas:
It is okay to delete the files on the next run of the application. My idea is to use one main folder in the temp path, with one folder inside it whose name is the process id of the current process. The next time the application runs, I check all folders and whether a process with that id is still running; if not, I delete the folder. The problem with this solution is that it needs admin permissions to run Process.GetProcessById.
I create one folder per process and use a lock file, keeping a stream open with FileOptions.DeleteOnClose set. On the next run of the application, I check all folders and their lock files. If there is no lock file, or I can delete it, I also delete the folder.
Do you have any other ideas?
EDIT: Implemented solution #2, works like a charm.
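For reference, a minimal sketch of solution #2; the folder layout and the ".lock" file name are assumptions of this sketch:

    using System;
    using System.IO;

    static class ScratchFolders
    {
        // Create this instance's folder and hold its lock file open.
        public static FileStream Acquire(string folder)
        {
            Directory.CreateDirectory(folder);
            // While this stream is open, no other instance can delete the lock
            // file; when the process exits or crashes, the OS closes the handle
            // and DeleteOnClose removes the file automatically.
            return new FileStream(Path.Combine(folder, ".lock"),
                FileMode.Create, FileAccess.ReadWrite, FileShare.None,
                4096, FileOptions.DeleteOnClose);
        }

        // On startup: remove folders whose owning process is gone.
        public static void CleanupStale(string root)
        {
            foreach (string dir in Directory.GetDirectories(root))
            {
                try
                {
                    string lockFile = Path.Combine(dir, ".lock");
                    // Deleting the lock file throws while another instance holds it.
                    if (File.Exists(lockFile)) File.Delete(lockFile);
                    Directory.Delete(dir, recursive: true);
                }
                catch (IOException) { /* still in use; leave it alone */ }
            }
        }
    }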
There is no built-in way to delete temp files automatically, but you can achieve this on reboot with a simple call to the WinAPI function MoveFileEx, specifying the flag MOVEFILE_DELAY_UNTIL_REBOOT - your temp file will be gone the next time you boot (if it still exists).
Calling this function has the effect of putting an entry into the HKLM\System\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations key in the registry (you can write that value directly, but calling the function is the preferred way to do it). Do this before doing your work with the temp file, then delete the temp file yourself when you're finished with it. If your process crashes, every file it worked with will already have an entry in the registry; if a file is already gone by the next reboot, nothing happens (no error is raised).
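A minimal P/Invoke sketch of that call:

    using System;
    using System.ComponentModel;
    using System.Runtime.InteropServices;

    static class PendingDelete
    {
        const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;

        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool MoveFileEx(string lpExistingFileName, string lpNewFileName, int dwFlags);

        // A null destination tells Windows to delete the file on the next boot.
        public static void DeleteOnReboot(string path)
        {
            if (!MoveFileEx(path, null, MOVEFILE_DELAY_UNTIL_REBOOT))
                throw new Win32Exception(Marshal.GetLastWin32Error());
        }
    }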
I have a project that uses the .net FileSystemWatcher to watch a Samba network share for video files. When it sees a file, it adds it to an encode queue. When files are dequeued, they are moved to a local directory where the process then encodes the file to several different formats and spits them out to an output directory.
The problem arises because the video files are so big, that it often takes several minutes for them to copy completely into the network directory, so when a file is dequeued, it may or may not have completely finished being copied to the network share. When the file is being copied from a windows machine, I am able to work around it because trying to move a file that is still being copied throws an IOException. I simply catch the exception and retry every few seconds until it is done copying.
When a file is dropped into the Samba share from a computer running OS X however, that IOException is not thrown. Instead, a partial file is copied to the working directory which then fails to encode because it is not a valid video file.
So my question is: is there any way to make the FileSystemWatcher wait for files to be completely written before firing its "Created" event (based on this question, I think the answer is "no")? Alternatively, is there a way to get files copied from OS X to behave like those copied from Windows? Or do I need to find another solution for watching the Samba share? Thanks for any help.
Option 3. Your best bet is to have a process that watches the incoming share for files. When it sees a file, note its size and/or modification date.
Then, after some amount of time (like 1 or 2 seconds), look again. Note any files that were seen before and compare their new sizes/mod dates to the ones you saw last time.
Any file that has not changed for some "sufficiently long" period of time (1s? 5s?) is considered "done".
Once you have a "done" file, MOVE/rename that file to another directory. It is from THIS directory that your loading process can run. It "knows" that only files that are complete are in this directory.
Having this two-stage process also lets you add further acceptance rules later, since all of those rules must pass before the file gets moved to its proper staging area (you can check format, check size, etc.) beyond the simple rule of mere file existence.
Your later process can rely on file existence, both as a start mechanism and a restart mechanism. When the process restarts after failure or shutdown, it can assume that any files in the second staging area are either new or incomplete and take appropriate action based on its own internal state. When the processing is done it can choose to either delete the file, or move it to a "finished" area for archiving or whatnot.
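A sketch of the polling loop described above; the paths, the *.mp4 filter, and the five-second quiet period are assumptions to adjust:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading;

    class StableFileMover
    {
        const string Incoming = @"\\server\share\incoming"; // hypothetical share
        const string Staging  = @"C:\encode\staging";       // hypothetical local dir

        static void Main()
        {
            var lastChange = new Dictionary<string, (long Size, DateTime When)>();
            while (true)
            {
                foreach (string file in Directory.GetFiles(Incoming, "*.mp4"))
                {
                    long size = new FileInfo(file).Length;
                    if (lastChange.TryGetValue(file, out var prev) && prev.Size == size)
                    {
                        // Size unchanged; move it once the quiet period has elapsed.
                        if (DateTime.UtcNow - prev.When >= TimeSpan.FromSeconds(5))
                        {
                            try
                            {
                                File.Move(file, Path.Combine(Staging, Path.GetFileName(file)));
                                lastChange.Remove(file);
                            }
                            catch (IOException) { /* still locked; retry next pass */ }
                        }
                    }
                    else
                    {
                        lastChange[file] = (size, DateTime.UtcNow); // size changed; reset clock
                    }
                }
                Thread.Sleep(1000); // poll roughly once per second
            }
        }
    }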
We are trying to push updates to multiple servers at once, and my manager has found that it is possible to rename a running .exe file. Using that knowledge, he wants to rename a running exe and copy a new version of it in its place, so that anyone running an in-memory copy of foo.exe is fine, and anybody who opens a shortcut pointing to foo.exe gets a new copy with the updates applied.
I guess I need to clarify: he doesn't expect the old copy to magically update; he just expects users to keep running the old copy until they open the exe again, at which point they will open the new one that has the name of the old one.
On his machine it sometimes throws an exception that the file is in use, but if he retries the rename in a loop it eventually succeeds. On my machine I have yet to get it to work, even in a loop.
My first and main question is this: is it ever acceptable to do this? Should renaming a running executable ever be a valid scenario?
Secondly, if it is a valid scenario, how could one do it reliably? Our current thought is to try a number of times using File.Move (C#) to do the rename, and if it doesn't work, write to an error log so it can be handled manually.
An airplane mechanic and a surgeon meet in a bar. The mechanic says "you know, we have basically the same job. We take broken stuff out and put new, better parts in." The surgeon says "yeah, but you don't have to keep the plane flying as you're making the repairs!"
Trying to update an application by moving files while the application is running seems about as dangerous as trying to fix an airplane in flight. Possible? Sure. Greatly increased risk of catastrophic crash? Yep.
If the application you are updating is a managed application, consider using ClickOnce Deployment. That way, the next time someone runs the application, if there is a new version available it will be copied down and installed automatically. That's much more safe and pleasant than trying to mess with an application while it's still running.
No, this is not acceptable. Do not do this. This is not a valid deployment mechanism. This should have been your (or his) first clue:
On his machine it sometimes throws an exception that the file is in use, but if he retries the rename in a loop it eventually succeeds.
And it won't work, anyway. His theory is quite wrong:
Using that knowledge, he wants to rename a running exe and copy a new version of it in its place, so that anyone running an in-memory copy of foo.exe is fine, and anybody who opens a shortcut pointing to foo.exe gets a new copy with the updates applied.
Specifically, the copy in memory will not be automatically replaced with the new executable just because it has the same name. The reason that you're allowed to rename the executable in the first place is because the operating system is not using the file name to find the application. The original executable will still be loaded, and it will remain loaded until you explicitly unload it and load the new, modified executable.
Notice how even modern web browsers like Chrome and Firefox, with their fancy automatic in-the-background updaters that no one ever notices, still have to close and relaunch the application in order to apply the updates.
Don't worry about shooting the messenger here. It's more likely that your customers and your tech support department will shoot you first.
See number 1.
In our organization, we solved the problem of updates by having two release folders, say EXE_A and EXE_B. We also have a release folder called EXE which only contains links, ALL of which point to either EXE_A or EXE_B, and from which the users run the applications.
When we publish a new version of the program, we publish it to the folder that is not referenced by the links and then update the links (in EXE). This way you do not run into exceptions because users are holding the application / assemblies. Also, if a user wants to run the updated version, all he needs to do is close and re-open the link in the EXE folder.
If you use Windows Vista/Server 2008 or newer, you can use mklink to create a symbolic link to the folder containing your application and start the application from the "symbolic linked" folder. For an update, create a new folder, e.g. "AppV2", and point the symlink at it, so the next time the user restarts the application it starts from the new folder without them noticing (see the example below).
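For example, with made-up folder names, run from an elevated command prompt:

    mklink /D C:\Apps\MyApp C:\Apps\MyApp_V1
    rem ... later, deploy V2 alongside and repoint the link:
    rmdir C:\Apps\MyApp
    mklink /D C:\Apps\MyApp C:\Apps\MyApp_V2

Users always launch C:\Apps\MyApp\foo.exe; rmdir on a symlink removes only the link, never the target folder.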
Renaming open files is ALWAYS a bad choice!
But in general I would think about a better deployment strategy anyway, because if you need such "hacks" it is always a messy situation. I don't know your application, but maybe ClickOnce would be a place to start, because you can configure it to check for updates on every start...
What does one need to take care of when creating a method to move (cut) a batch of files from one directory to another?
Let's say the method signature is Move(filter, sourceFolder, destinationFolder, overwrite). What do I need to take care of to avoid the risk of data loss, especially given that overwriting the destination file and deleting the source file are both involved?
Several possible scenarios I am worried about: an error occurs while a move is in progress; a file is moved but is somehow corrupted; a file with the same name is deleted at the destination to make room for the new file, but then an error happens while moving the new file; etc.
I'm using .NET's System.IO namespace for the move operations.
Without transactions, the safest way is to copy, verify, and then delete. It is up to you whether you move per file (this is how Windows does it; a move operation can fail, leaving you with half of the files moved) or allow only the entire batch to be moved, or none at all.
You would have to decide how to respond to files that were modified during the move, source files that cannot be deleted afterwards, or destination files that are already open when you perform a rollback.
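A minimal sketch of the copy/verify/delete pattern for a single file (a batch version would loop over the filter matches); the ".partial" suffix and the SHA-256 verification are choices of this sketch, not requirements:

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class SafeMove
    {
        public static void MoveFile(string source, string dest, bool overwrite)
        {
            if (File.Exists(dest) && !overwrite)
                throw new IOException(dest + " already exists.");

            // 1. Copy under a temporary name so a crash never leaves a
            //    half-written file at the destination path.
            string temp = dest + ".partial";
            File.Copy(source, temp, overwrite: true);

            // 2. Verify the copy before touching any original data.
            if (!HashesMatch(source, temp))
            {
                File.Delete(temp);
                throw new IOException("Copy verification failed for " + source);
            }

            // 3. Only now replace the target and remove the source.
            if (File.Exists(dest)) File.Delete(dest);
            File.Move(temp, dest);
            File.Delete(source);
        }

        static bool HashesMatch(string a, string b)
        {
            using (var sha = SHA256.Create())
            using (var fa = File.OpenRead(a))
            using (var fb = File.OpenRead(b))
                return Convert.ToBase64String(sha.ComputeHash(fa)) ==
                       Convert.ToBase64String(sha.ComputeHash(fb));
        }
    }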