I am trying to use the FileSystemWatcher, and am having some luck.
The goal is to MOVE the file that gets created from the monitored folder to a new folder.
But I have hit 2 snags. Firstly, if I move 3 files into the folder at once (select 3 files, Ctrl+X, then Ctrl+V into my monitored folder), the watcher only triggers for the first file. The other 2 don't get processed.
FileSystemWatcher fsw = new FileSystemWatcher(FolderToMonitor);
fsw.Created += new FileSystemEventHandler(fsw_Created);

bool monitor = true;
while (monitor)
{
    // Block for up to 2 seconds waiting for a single change notification
    fsw.WaitForChanged(WatcherChangeTypes.All, 2000);
    if (Console.KeyAvailable)
    {
        monitor = false;
    }
}

Show("User has quit the process...", ConsoleColor.Yellow);
Console.ReadKey();
Is there a way to make it see all 3?
Secondly, if I move a file into the monitored folder from a different drive, it takes a few seconds for the file to copy into the folder. However, the watcher triggers as soon as the file starts copying in, so the file is still locked and not ready to be moved.
Is there a way I can wait for the file to complete its copy into the monitored folder before I process it?
According to the MSDN documentation:
The Windows operating system notifies your component of file changes in a buffer created by the FileSystemWatcher. If there are many changes in a short time, the buffer can overflow. This causes the component to lose track of changes in the directory, and it will only provide blanket notification. Increasing the size of the buffer with the InternalBufferSize property is expensive, as it comes from non-paged memory that cannot be swapped out to disk, so keep the buffer as small yet large enough to not miss any file change events. To avoid a buffer overflow, use the NotifyFilter and IncludeSubdirectories properties so you can filter out unwanted change notifications.
Perhaps that explains your issue?
Also note that cutting and pasting files from one directory to another on the same volume is essentially a rename operation, so you should use the Renamed event to detect them.
As for your other problem: try using the Changed event together with Created, as I believe both will be raised exactly once for a file (note that moving a file from another drive is not a simple rename operation: it's a copy and a delete), so the Changed event should indicate when the file copy operation has been completed (i.e. it won't fire until the file is complete).
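A minimal sketch of wiring up all three events (event-based, rather than the WaitForChanged loop above; the Console.WriteLine calls are placeholders for your own move logic, and FolderToMonitor is taken from the question):

var fsw = new FileSystemWatcher(FolderToMonitor);

// Same-volume cut/paste shows up as a rename
fsw.Renamed += (s, e) => Console.WriteLine("Renamed: {0} -> {1}", e.OldFullPath, e.FullPath);

// Cross-volume moves show up as Created followed by Changed;
// the final Changed typically arrives when the copy has finished
fsw.Created += (s, e) => Console.WriteLine("Created: {0}", e.FullPath);
fsw.Changed += (s, e) => Console.WriteLine("Changed: {0}", e.FullPath);

// Required for the events to fire (WaitForChanged does not need this flag)
fsw.EnableRaisingEvents = true;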
Related
I've scoured for information, but I just fear I may be getting in over my head here, as I am not proficient in multithreading. I have a desktop app that needs to create a read-only, temp copy of an existing file, open the file in its default application, and then delete the file once the user is done viewing it.
It must open read-only, as the user may try to save it thinking it's the original file.
To do this I have created a new thread which copies the file to a temp path, sets the file's attributes, attaches a Process handler to it, then "waits" and deletes the file on exit. The advantage of this is that the thread will continue to run even after the program has exited (so it seems, anyway). This way the file will still be deleted even if the user keeps it open longer than the program.
Here is my code. The att object holds my file information.
new Thread(() =>
{
    // Create the temp file name
    string temp = System.IO.Path.GetTempPath() + att.FileNameWithExtension;
    // Determine if this file already exists (in case it didn't delete).
    // This is important, as a left-over read-only attribute will create
    // User Access Control (UAC) issues when overwriting.
    if (File.Exists(temp)) { File.SetAttributes(temp, FileAttributes.Temporary); }
    // Copy original file to temp location; overwrite if it already exists due to a previous deletion failure
    File.Copy(att.FullFileName, temp, true);
    // Set temp file attributes
    File.SetAttributes(temp, FileAttributes.Temporary | FileAttributes.ReadOnly);
    // Start process and monitor: open the attachment in its default program
    var p = Process.Start(temp);
    if (p != null) { p.WaitForExit(); }
    // After the process ends, remove the read-only attribute to allow deletion without UAC issues
    File.SetAttributes(temp, FileAttributes.Temporary);
    File.Delete(temp);
}).Start();
I've tested it, and so far it seems to be doing the job, but it all feels so messy. I honestly feel like there should be an easier way to handle this that doesn't involve creating new threads. I've looked into copying files into memory first, but I can't seem to figure out how to open them in their default application from a MemoryStream.
So my question is:
Is there a better way to achieve opening a read-only, temp copy of a file that doesn't write to disk first?
If not, what implications could I face from taking the multithreaded approach?
Any info is appreciated.
Instead of removing the temporary file(s) on shutdown, remove the left-over files at startup.
This is often easier to implement than trying to ensure that such cleanup code runs at process termination and handles the 'forced' cases like power failure, 'kill -9', 'End process', etc.
I like to create a 'temp' folder for such files: all of my apps scan and delete any files in such a folder at startup, and the code can be added to any new project without change.
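A minimal sketch of that startup sweep, assuming a dedicated per-app temp folder whose path you choose yourself:

using System.IO;

static void CleanupLeftoverTempFiles(string tempFolder)
{
    if (!Directory.Exists(tempFolder))
    {
        Directory.CreateDirectory(tempFolder);
        return;
    }
    foreach (string file in Directory.GetFiles(tempFolder))
    {
        try
        {
            // Clear read-only first so the delete doesn't fail
            File.SetAttributes(file, FileAttributes.Normal);
            File.Delete(file);
        }
        catch (IOException)
        {
            // Still open in some viewer; it will be swept on the next startup
        }
    }
}

Call it once at startup, before creating any new temp copies.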
I am writing a tool that watches a location, preferably a remote location, and if a new folder or file is created, downloads it to a local location.
Currently I am watching the remote folder with FileSystemWatcher; when a new folder/file is created, I start a timer, and when the timer reaches X minutes I start copying it to local.
Creating a new folder or file in the watched folder triggers FileSystemWatcher.Changed, but it sometimes fails if there are a lot of sub-directories, and if a large file is being copied to the watched folder, it is only detected when the copy starts, so my timer can run out before the copy is finished.
So:
I have 3 remote computers/locations: A, B, C.
A starts to copy some folders/files to B, and
C listens to B.
How can C check whether A has finished copying, with or without FileSystemWatcher?
I don't want to constantly compare B and C and copy the rest of the files.
I checked other questions, but they either don't answer this or I have already implemented those solutions.
I think you are asking about the system change journal. That said, there will always be cases where the file is in use, has been deleted, updated, etc. between the time you detect you need to copy it and when you really start copying. Here's an old but accurate article that can give you more details.
http://www.microsoft.com/msj/0999/journal/journal.aspx
From the article abstract:
"The Windows 2000 Change Journal is a database that contains a list of every change made to the files or directories on an NTFS 5.0 volume. Each volume has its own Change Journal database that contains records reflecting the changes occurring to that volume's files and directories."
Scroll down to the heading "ReasonMask and ReturnOnlyOnClose".
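For reference, here is a minimal P/Invoke sketch of querying a volume's change journal; the constants and struct layout follow the article, opening a volume handle requires administrator rights, and the C: drive is just an example:

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class UsnJournalQuery
{
    const uint FSCTL_QUERY_USN_JOURNAL = 0x000900F4;

    [StructLayout(LayoutKind.Sequential)]
    struct USN_JOURNAL_DATA
    {
        public ulong UsnJournalID;
        public long FirstUsn;
        public long NextUsn;
        public long LowestValidUsn;
        public long MaxUsn;
        public ulong MaximumSize;
        public ulong AllocationDelta;
    }

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(string lpFileName, uint dwDesiredAccess,
        uint dwShareMode, IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool DeviceIoControl(SafeFileHandle hDevice, uint dwIoControlCode,
        IntPtr lpInBuffer, int nInBufferSize, out USN_JOURNAL_DATA lpOutBuffer,
        int nOutBufferSize, out int lpBytesReturned, IntPtr lpOverlapped);

    static void Main()
    {
        // GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, OPEN_EXISTING
        using (var volume = CreateFile(@"\\.\C:", 0x80000000, 0x3, IntPtr.Zero, 3, 0, IntPtr.Zero))
        {
            if (volume.IsInvalid)
                throw new IOException("Cannot open volume", Marshal.GetLastWin32Error());

            USN_JOURNAL_DATA data;
            int bytesReturned;
            if (!DeviceIoControl(volume, FSCTL_QUERY_USN_JOURNAL, IntPtr.Zero, 0,
                out data, Marshal.SizeOf(typeof(USN_JOURNAL_DATA)), out bytesReturned, IntPtr.Zero))
                throw new IOException("FSCTL_QUERY_USN_JOURNAL failed", Marshal.GetLastWin32Error());

            Console.WriteLine("Journal ID: {0:X}, next USN: {1}", data.UsnJournalID, data.NextUsn);
        }
    }
}

Reading the records themselves (FSCTL_READ_USN_JOURNAL) follows the same pattern; note this only works on a local NTFS volume, so it would have to run on B. See the article for the ReasonMask and ReturnOnlyOnClose details.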
I have a FileSystemWatcher that watches multiple directories for newly created files.
((System.ComponentModel.ISupportInitialize)(FileMonitor)).BeginInit();
FileMonitor.EnableRaisingEvents = true;
FileMonitor.Created += new FileSystemEventHandler(FileMonitor_Created);
FileMonitor.Path = Path.ToString();
FileMonitor.IncludeSubdirectories = true;
FileMonitor.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite | NotifyFilters.FileName | NotifyFilters.DirectoryName | NotifyFilters.Attributes;
((System.ComponentModel.ISupportInitialize)(FileMonitor)).EndInit();
For some reason the FileMonitor_Created event is not always fired when running the application, even though it should be. It feels random...
However, if I put a breakpoint at the FileMonitor_Created method, it works perfectly: the event fires every time it should, as long as the breakpoint is set.
I've tried setting an InternalBufferSize for the FileMonitor, but that had no effect.
Update
I added the Changed event to the FileMonitor and gave it the same handler as the Created event. Somehow it works now, although the file is actually created, not changed.
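In code, the update amounts to routing both events through one handler, something like:

FileMonitor.Created += new FileSystemEventHandler(FileMonitor_Created);
// A newly created file also raises Changed as its contents are written
FileMonitor.Changed += new FileSystemEventHandler(FileMonitor_Created);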
I'm still curious why it always worked 'the old way' when setting a breakpoint.
How many changes are you making?
The Windows operating system notifies your component of file changes in a buffer created by the FileSystemWatcher. If there are many changes in a short time, the buffer can overflow. This causes the component to lose track of changes in the directory, and it will only provide blanket notification. Increasing the size of the buffer with the InternalBufferSize property is expensive, as it comes from non-paged memory that cannot be swapped out to disk, so keep the buffer as small yet large enough to not miss any file change events. To avoid a buffer overflow, use the NotifyFilter and IncludeSubdirectories properties so you can filter out unwanted change notifications.
Taken from MSDN
If you have a breakpoint it's working, but if you don't, it's not?
Are you sure that there's not something in your event handler? Like an exception occurring which makes the program 'feel' like it's not doing anything? Can you post the code in the handler?
Separate your business logic from the FileMonitor_Created event.
In the event handler you should only store the event parameters and return.
E.g. store the event parameters in a queue, then process those events asynchronously, as sketched below.
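A minimal sketch of that pattern using a BlockingCollection as the queue (the watched path and the processing body are placeholders):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class WatcherQueue
{
    static readonly BlockingCollection<string> PendingFiles = new BlockingCollection<string>();

    static void Main()
    {
        var watcher = new FileSystemWatcher(@"C:\watched"); // hypothetical path
        // Keep the handler tiny: enqueue and return immediately
        watcher.Created += (s, e) => PendingFiles.Add(e.FullPath);
        watcher.EnableRaisingEvents = true;

        // Consumer: does the slow work away from the watcher's callbacks
        Task.Run(() =>
        {
            foreach (string path in PendingFiles.GetConsumingEnumerable())
            {
                Console.WriteLine("Processing {0}...", path); // placeholder for real work
            }
        });

        Console.ReadKey();
    }
}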
FileMonitor.Created fires when a file is created, but not when it replaces a previous file with the same creation date/time.
Scenario 1) Copy-paste and overwrite the same abc.txt file in the input folder, with no change to the file's creation date or content: the file watcher doesn't recognize the file.
Scenario 2) Copy-paste and overwrite the same file in the input folder with a new creation date: the file watcher recognizes the file.
So the Created event works in the second scenario. It might not be your situation, but it looks like hidden behavior at first view.
When the event is raised, file processing might take some time. During that time another file might be created, and the event handler will not handle the second file because it is still handling the first. Therefore, the second file is missed by the FileSystemWatcher.
The solution is to separate file detection from file processing into two threads connected by a queue. It is a producer-consumer queue.
File detection should be as short as it can be. It should only detect the file, enqueue its file name for the file processing thread, and return so that another file can be detected. The file processing thread can then dequeue a file name and take as much time as it needs to process it.
I explained this in detail with the code in this article: FileSystemWatcher skips some events
Context: A team of operators work with large batch files up to 10GB in size in a third party application. Each file contains thousands of images and after processing every 50 images, they hit the save button. The work place has unreliable power and if the power goes out during a save, the entire file becomes corrupt. To overcome this, I am writing a small utility using the FileSystemWatcher to detect saves and create a backup so that it may be restored without the need to reprocess the entire batch.
Problem: The FileSystemWatcher does a very good job of reporting events, but there is a problem I can't pinpoint. Since the monitored files are large, the save process takes a few seconds, and I want to be notified once the save operation is complete. I suspect that every time the file buffer is flushed to disk, it triggers an unwanted event. The file remains locked for writing whether or not a save is in progress, so I cannot tell that way.
Creating a backup of the file DURING a save operation defeats the purpose, since it corrupts the backed-up file.
Question:
Is there a way to use the FileSystemWatcher to be notified after the save operation is complete?
If not, how else could I reliably check to see if the file is still being written to?
Alternatives: Any alternative suggestions would be welcome as well.
There's really no direct way to do exactly what you're trying to do. The file system itself doesn't know when a save operation is completed. In logical terms, you may think of it as a series of saves simply because the user clicks the Save button multiple times, but that isn't how the file system sees it. As long as the application has the file locked for writing, as far as the file system is concerned it is still in the process of being saved.
If you think about it, it makes sense. If the application holds onto write access to the file, how would the file system know when the file is in a "corrupt" state and when it's not? Only the application writing the file knows that.
If you have access to the application writing the file, you might be able to solve this problem. Failing that, you might get somewhere with the last-modified date, creating a backup only if the file hasn't been modified for a certain period of time, but that is bound to be buggy and unreliable.
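A sketch of that last-modified heuristic; the quiet period is an assumption, and as noted above it is best-effort only:

using System;
using System.IO;

// True if the file hasn't been written to for at least quietPeriod.
// Heuristic only: the application may still hold the file in a corrupt state.
static bool LooksIdle(string path, TimeSpan quietPeriod)
{
    DateTime lastWrite = File.GetLastWriteTimeUtc(path);
    return DateTime.UtcNow - lastWrite >= quietPeriod;
}

// Usage: back up only after roughly 10 quiet seconds (threshold is a guess)
// if (LooksIdle(batchFile, TimeSpan.FromSeconds(10))) File.Copy(batchFile, backupPath, true);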
I have a project that uses the .net FileSystemWatcher to watch a Samba network share for video files. When it sees a file, it adds it to an encode queue. When files are dequeued, they are moved to a local directory where the process then encodes the file to several different formats and spits them out to an output directory.
The problem arises because the video files are so big that it often takes several minutes for them to copy completely into the network directory, so when a file is dequeued, it may or may not have completely finished being copied to the network share. When the file is being copied from a Windows machine, I am able to work around it because trying to move a file that is still being copied throws an IOException. I simply catch the exception and retry every few seconds until it is done copying.
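That retry loop might look something like this (a sketch; the paths and the retry interval are made up):

using System;
using System.IO;
using System.Threading;

static void MoveWhenReady(string sourcePath, string workingPath)
{
    // While the source is still being copied in, File.Move throws an
    // IOException (on Windows), so catch it and retry every few seconds
    while (true)
    {
        try
        {
            File.Move(sourcePath, workingPath);
            return;
        }
        catch (IOException)
        {
            Thread.Sleep(TimeSpan.FromSeconds(3)); // still locked; retry shortly
        }
    }
}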
When a file is dropped into the Samba share from a computer running OS X however, that IOException is not thrown. Instead, a partial file is copied to the working directory which then fails to encode because it is not a valid video file.
So my question is, is there any way to make the FileSystemWatcher wait for files to be completely written before firing its "Created" event (based on this question I think the answer to that question is "no")? Alternatively, is there a way to get files copied from OS X to behave similarly to those in windows? Or do I need to find another solution for watching the Samba share? Thanks for any help.
Option 3. Your best bet is to have a process that watches the incoming share for files. When it sees a file, note its size and/or modification date.
Then, after some amount of time (like, 1 or 2 seconds), look again. Note any files that were seen before and compare their new sizes/mod dates to the one you saw last time.
Any file that has not changed for some "sufficiently long" period of time (1s? 5s?) is considered "done".
Once you have a "done" file, MOVE/rename that file to another directory. It is from THIS directory that your loading process can run. It "knows" that only files that are complete are in this directory.
By having this two-stage process, you can later add other rules for accepting a file beyond its mere existence, since all of those rules must pass before the file gets moved to its proper staging area (you can check format, check size, etc.).
Your later process can rely on file existence, both as a start mechanism and a restart mechanism. When the process restarts after a failure or shutdown, it can assume that any files in the second staging area are either new or incomplete and take appropriate action based on its own internal state. When processing is done, it can choose to either delete the file or move it to a "finished" area for archiving or whatnot.
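A sketch of that first stage; the folder names, the scan interval, and the "sufficiently long" threshold are all assumptions, and for brevity it scans top-level files only:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

class IncomingWatcher
{
    const string Incoming = @"\\server\share\incoming"; // hypothetical share
    const string Staging = @"C:\staging";               // hypothetical local dir

    static void Main()
    {
        // Last observed (size, mod date) per file
        var seen = new Dictionary<string, Tuple<long, DateTime>>();

        while (true)
        {
            foreach (string path in Directory.GetFiles(Incoming))
            {
                var info = new FileInfo(path);
                var snapshot = Tuple.Create(info.Length, info.LastWriteTimeUtc);

                Tuple<long, DateTime> previous;
                if (seen.TryGetValue(path, out previous) && previous.Equals(snapshot))
                {
                    // Unchanged since the last scan: consider it "done" and move it
                    string dest = Path.Combine(Staging, Path.GetFileName(path));
                    try { File.Move(path, dest); seen.Remove(path); }
                    catch (IOException) { /* still locked; try again next scan */ }
                }
                else
                {
                    seen[path] = snapshot; // new or still growing
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(2)); // "sufficiently long" is a judgment call
        }
    }
}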