I'm writing an application that runs on a file server and monitors files being dropped in a shared folder.
I'm using a FileSystemWatcher and it's working well for files. However, I would like to process folders as well, so that if someone drops a folder into the share it gets zipped up and treated like the files.
What I don't know is how to check when all the files in the directory (and its subdirectories) have finished copying.
My best idea so far has been to start a timer and test every 10 seconds whether the contents differ from the previous check and whether any of the files are still locked. When no files are locked and the contents have stayed the same for 10 seconds, process the folder.
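In code, the comparison part of that idea would look something like this (a rough sketch; the 10-second interval is arbitrary and error handling is omitted):

```
// Sketch: snapshot file paths and sizes, wait, snapshot again; if nothing
// changed between the two passes, the copy has probably settled.
using System;
using System.IO;
using System.Linq;
using System.Threading;

static (string, long)[] Snapshot(string folder) =>
    new DirectoryInfo(folder)
        .EnumerateFiles("*", SearchOption.AllDirectories)
        .Select(f => (f.FullName, f.Length))
        .OrderBy(t => t.FullName)
        .ToArray();

static bool FolderIsSettled(string folder)
{
    var before = Snapshot(folder);
    Thread.Sleep(TimeSpan.FromSeconds(10));   // arbitrary interval
    return Snapshot(folder).SequenceEqual(before);
}
```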
Is there a better way of doing this?
I have a question regarding the viability of using FileSystemWatcher in my particular application. What I have is an application that monitors folder creation for a lab test machine. The machine processes vials of mash to determine how much alcohol exists in them. When the machine completes its analysis it produces a report which it saves to a hard drive location. Within the folder that the machine creates are two files which I read with my current process and write the values to database tables.
My current process, however, has high overhead: it re-reads the entire directory that stores the test folders each time it runs. Over time that folder will become so large that the re-reading will probably exhaust the computer's resources.
So I am thinking about changing the code to use FileSystemWatcher, but it has been suggested that when many files are created in the watched path, some of them can be missed (the watcher's internal buffer can overflow). When the lab system creates a folder, it puts about 136 files in that folder.
Will this work for monitoring when I am only interested in two CSV files inside that folder?
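The configuration I have in mind is along these lines (a sketch; the path and handler body are placeholders, and the enlarged buffer is meant to reduce dropped events):

```
// Sketch: watch only CSV files under the report root and enlarge the
// internal buffer to reduce the chance of dropped events.
using System;
using System.IO;

var watcher = new FileSystemWatcher(@"D:\LabReports")   // placeholder path
{
    Filter = "*.csv",
    IncludeSubdirectories = true,      // report folders are created below the root
    InternalBufferSize = 64 * 1024,    // 64 KB is the documented maximum
    NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
};
watcher.Created += (s, e) => Console.WriteLine($"New report file: {e.FullPath}");
watcher.EnableRaisingEvents = true;
```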
I am writing a tool that listens to a location, preferably a remote one, and when a new folder or file is created there, downloads it to a local location.
Currently I am watching the remote folder with FileSystemWatcher; when a new folder/file is created, I start a timer, and when the timer reaches X minutes the copy to the local location begins.
Creating a new folder or file in the watched folder triggers FileSystemWatcher.Changed, but this sometimes fails when there are a lot of subdirectories. Also, when a large file is being copied into the watched folder, the watcher only detects it when the copying starts, so my timer can expire before the copy is finished.
So:
I have 3 remote computers/locations: A, B, and C.
A starts to copy some folders/files to B, and
C listens to B.
How can C check if A is finished copying with or without FileSystemWatcher?
I don't want to constantly compare B and C and copy the rest of the files.
I checked other questions, but they either don't answer this or I have already implemented those solutions.
I think you are asking about the system change journal. That said, there will always be cases where the file is in use, has been deleted, updated, etc. between the time you detect you need to copy it and the time you actually start copying. Here's an old but accurate article that gives more details.
http://www.microsoft.com/msj/0999/journal/journal.aspx
From the article abstract:
"The Windows 2000 Change Journal is a database that contains a list of every change made to the files or directories on an NTFS 5.0 volume. Each volume has its own Change Journal database that contains records reflecting the changes occurring to that volume's files and directories."
Scroll down to the heading "ReasonMask and ReturnOnlyOnClose".
I have an application which has 10,000+ files in a directory. Once a day a few more files are added to the directory. In theory these files don't change. The problem is that it takes a lot of time to check that these files aren't already in the database. I looked at the FileSystemWatcher class, but my app doesn't always run 24/7. It is also possible for a new file's timestamp to be older than the newest existing file, so I cannot rely on file timestamps.
Having a false positive on a new file is better than a false negative.
I am looking for suggestions on a way to speed this up.
This is a Windows 7+ .NET app.
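For illustration, one direction is to diff the directory listing against a persisted set of already-ingested names; losing the snapshot only causes re-checks, i.e. the acceptable false positives (a sketch, with placeholder paths):

```
// Sketch: diff the directory listing against a persisted set of names
// that were already ingested. Losing the snapshot only causes re-checks
// (false positives), which is stated to be acceptable.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

const string snapshotPath = @"C:\app\seen-files.txt";   // placeholder
const string watchedDir = @"C:\data\incoming";          // placeholder

var seen = File.Exists(snapshotPath)
    ? new HashSet<string>(File.ReadAllLines(snapshotPath), StringComparer.OrdinalIgnoreCase)
    : new HashSet<string>(StringComparer.OrdinalIgnoreCase);

var candidates = Directory.EnumerateFiles(watchedDir)
                          .Where(f => !seen.Contains(f))
                          .ToList();

// ... verify candidates against the database and process them ...

seen.UnionWith(candidates);
File.WriteAllLines(snapshotPath, seen);
```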
I download GBs of stuff every day. And I get all OCD and organize files and folders so many times during the day that it's driving me nuts.
So I plan on writing an app that detects when a file has finished downloading (to the Windows Downloads folder), and then places it in its relevant categorized folder.
E.g.:
I download an app. When the tool detects that the file has finished downloading, it places it into the Applications sub-folder. Or, when I finish downloading a document, the document is placed inside the Documents sub-folder of the Downloads folder.
The problem I have here is that I don't want to do this unless there is a definitive way to tell whether a file has finished downloading.
Things I have thought of:
I have thought about implementing FileSystemWatcher on the Downloads folder, and when a new file is created there, it gets added to a list. When FileSystemWatcher detects that the file's size has changed or that it has been modified, it starts a timer; the purpose of this timer is to determine, after x seconds, whether the download is complete. It does this by assuming (wrongly) that if a file's size has not increased in a specified period of time, the download is complete.
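In code, that idea would look something like this (a rough sketch; the quiet period is a guess, and disposal/race handling is omitted):

```
// Sketch: restart a per-file timer on every change notification; when a
// file has been quiet for the whole interval, assume the download is done.
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

var timers = new ConcurrentDictionary<string, Timer>();
var quiet = TimeSpan.FromSeconds(5);   // placeholder "settle" period

var watcher = new FileSystemWatcher(
    Environment.GetFolderPath(Environment.SpecialFolder.UserProfile) + @"\Downloads");
watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite | NotifyFilters.Size;

FileSystemEventHandler reset = (s, e) =>
    timers.AddOrUpdate(e.FullPath,
        path => new Timer(_ => OnSettled(path), null, quiet, Timeout.InfiniteTimeSpan),
        (path, t) => { t.Change(quiet, Timeout.InfiniteTimeSpan); return t; });

watcher.Created += reset;
watcher.Changed += reset;
watcher.EnableRaisingEvents = true;

void OnSettled(string path)
{
    if (timers.TryRemove(path, out var t)) t.Dispose();
    Console.WriteLine($"Probably finished: {path}");   // move to its category folder here
}
```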
That's all I can think of. Any ideas on how this kind of thing can be accomplished?
A file is locked while it is being written to. Not every file, but you can check whether the file is still open by another application. If the file is not open, that should tell you it has finished downloading.
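The usual way to make that check is to try opening the file exclusively and treat an IOException as "still in use" (a short sketch):

```
// Sketch: try to open the file exclusively; an IOException usually means
// another process (e.g. the browser) still has it open.
using System.IO;

static bool IsFileInUse(string path)
{
    try
    {
        using var stream = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None);
        return false;
    }
    catch (IOException)
    {
        return true;
    }
}
```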
I have a project that uses the .net FileSystemWatcher to watch a Samba network share for video files. When it sees a file, it adds it to an encode queue. When files are dequeued, they are moved to a local directory where the process then encodes the file to several different formats and spits them out to an output directory.
The problem arises because the video files are so big that it often takes several minutes for them to copy completely to the network directory, so when a file is dequeued it may or may not have finished being copied to the network share. When the file is being copied from a Windows machine, I am able to work around it because trying to move a file that is still being copied throws an IOException. I simply catch the exception and retry every few seconds until the copy is done.
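That workaround looks roughly like this (simplified; the delay is arbitrary and there is no retry cap):

```
// Sketch: keep retrying the move; while Windows still has the file open
// for copying, File.Move throws IOException.
using System;
using System.IO;
using System.Threading;

static void MoveWhenReady(string source, string destination)
{
    while (true)
    {
        try
        {
            File.Move(source, destination);
            return;
        }
        catch (IOException)
        {
            Thread.Sleep(TimeSpan.FromSeconds(3));   // arbitrary retry delay
        }
    }
}
```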
When a file is dropped into the Samba share from a computer running OS X, however, that IOException is not thrown. Instead, a partial file is copied to the working directory, which then fails to encode because it is not a valid video file.
So my question is: is there any way to make the FileSystemWatcher wait for files to be completely written before firing its "Created" event? (Based on this question, I think the answer is "no".) Alternatively, is there a way to get files copied from OS X to behave like those copied from Windows? Or do I need to find another solution for watching the Samba share? Thanks for any help.
Option 3. Your best bet is to have a process that watches the incoming share for files. When it sees a file, note its size and/or modification date.
Then, after some amount of time (like, 1 or 2 seconds), look again. Note any files that were seen before and compare their new sizes/mod dates to the ones you saw last time.
Any file that has not changed for some "sufficiently long" period of time (1s? 5s?) is considered "done".
Once you have a "done" file, MOVE/rename that file to another directory. It is from THIS directory that your loading process can run. It "knows" that only files that are complete are in this directory.
This two-stage process also lets you add other acceptance rules later, beyond the simple rule of file existence, since all of those rules must pass before the file gets moved to its proper staging area (you can check format, check size, etc.).
Your later process can rely on file existence, both as a start mechanism and a restart mechanism. When the process restarts after a failure or shutdown, it can assume that any files in the second staging area are either new or incomplete and take appropriate action based on its own internal state. When the processing is done it can choose to either delete the file, or move it to a "finished" area for archiving or whatnot.
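Put together, the watching half of that design might look something like this (a sketch; the directories and the "sufficiently long" threshold are placeholders):

```
// Sketch: poll the incoming share, track (size, mtime) per file, and move
// files that haven't changed for a "sufficiently long" period into the
// staging directory that the rest of the pipeline reads from.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

const string incoming = @"\\server\share\incoming";   // placeholder
const string staging = @"D:\encode\staging";          // placeholder
var stableFor = TimeSpan.FromSeconds(5);              // "sufficiently long"

var lastSeen = new Dictionary<string, (long Size, DateTime MTime, DateTime Since)>();

while (true)
{
    foreach (var file in Directory.EnumerateFiles(incoming))
    {
        var info = new FileInfo(file);
        if (lastSeen.TryGetValue(file, out var prev) &&
            prev.Size == info.Length && prev.MTime == info.LastWriteTimeUtc)
        {
            // Unchanged since first recorded; "done" once stable long enough.
            if (DateTime.UtcNow - prev.Since >= stableFor)
            {
                File.Move(file, Path.Combine(staging, info.Name));
                lastSeen.Remove(file);
            }
        }
        else
        {
            lastSeen[file] = (info.Length, info.LastWriteTimeUtc, DateTime.UtcNow);
        }
    }
    Thread.Sleep(TimeSpan.FromSeconds(2));   // poll interval
}
```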