I have a question regarding the viability of using FileSystemWatcher in my particular application. What I have is an application that monitors folder creation for a lab test machine. The machine processes vials of mash to determine how much alcohol exists in them. When the machine completes its analysis it produces a report which it saves to a hard drive location. Within the folder that the machine creates are two files which I read with my current process and write the values to database tables.
My current process, however, has high overhead: it basically re-reads the entire directory that stores the test folders each time it runs. Over time that directory will become so large that the reading process will probably exhaust the resources on the computer.
So I am thinking about changing the code to use FileSystemWatcher, but it has been suggested that when many files are created in the watched path, some of the events can be missed. When the lab system creates a result folder, it writes about 136 files into it.
Will FileSystemWatcher work for this when I am only interested in two CSV files inside each folder?
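If it does turn out to be viable, a minimal sketch of the setup might look like the following. The report path, buffer size, and handler body are assumptions for illustration, not details from the question:

```csharp
using System;
using System.IO;

class VialReportWatcher
{
    static void Main()
    {
        // Directory the lab machine writes its report folders into (path is an assumption).
        var watcher = new FileSystemWatcher(@"D:\LabReports")
        {
            // Only raise events for CSV files, even though ~136 files arrive per folder.
            Filter = "*.csv",
            // The two CSVs live inside the folder the machine creates, so watch subdirectories.
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite,
            // A larger buffer makes it less likely that a burst of events overflows and gets dropped.
            InternalBufferSize = 64 * 1024
        };

        watcher.Created += (sender, e) =>
        {
            // e.FullPath is the newly created CSV; hand it off to the database import here.
            Console.WriteLine($"New CSV detected: {e.FullPath}");
        };

        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching... press Enter to stop.");
        Console.ReadLine();
    }
}
```

Raising InternalBufferSize helps with bursts of events, but the usual safety net for missed events is still an occasional catch-up scan of folders that have not been processed yet.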
Related
I am writing a tool that listens to a location, preferably a remote one, and if a new folder or file is created, it downloads it to a local location.
Currently I am listening to the remote folder with FileSystemWatcher; when a new folder/file is created, I start a timer, and when the timer reaches X minutes the copy to the local location starts.
Creating a new folder or file in the watched folder triggers FileSystemWatcher.Changed, but this sometimes fails if there are a lot of sub-directories. Also, if a large file is being copied into the watched folder, the event only fires when the copy starts, so my timer can expire before the copy is finished.
So:
I have 3 remote computers/locations: A, B, C.
A starts to copy some folders/files to B, and
C listens to B.
How can C check whether A has finished copying, with or without FileSystemWatcher?
I don't want to constantly compare B and C and copy rest of the files.
I checked other questions, but they don't answer this, or I have already implemented those solutions.
I think you are asking about the system change journal. That said, there will always be cases where the file is in use, has been deleted, updated, etc. between the time you detect you need to copy it and when you really start copying. Here's an old but accurate article that can give you more details.
http://www.microsoft.com/msj/0999/journal/journal.aspx
From the article abstract:
"The Windows 2000 Change Journal is a database that contains a list of every change made to the files or directories on an NTFS 5.0 volume. Each volume has its own Change Journal database that contains records reflecting the changes occurring to that volume's files and directories."
Scroll down to the heading "ReasonMask and ReturnOnlyOnClose".
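If the change journal is heavier machinery than you want, a simpler (if less reliable) heuristic is to treat a file as finished once it can be opened with an exclusive lock. This is only a sketch under that assumption; the class name and timing parameters are made up:

```csharp
using System;
using System.IO;
using System.Threading;

static class CopyCompletion
{
    // Returns true once the file can be opened with an exclusive lock, i.e. the
    // process copying from A to B no longer has it open for writing.
    public static bool WaitUntilUnlocked(string path, TimeSpan timeout, TimeSpan pollInterval)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            try
            {
                using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                {
                    return true; // exclusive open succeeded: the writer has closed the file
                }
            }
            catch (IOException)
            {
                // Still locked (most likely still being copied); wait and try again.
                Thread.Sleep(pollInterval);
            }
        }
        return false; // gave up; the caller decides whether to retry later
    }
}
```

C could run this check for each file its watcher reports on B before copying. As the caveat above notes, the file can still disappear or change between this check and the actual copy.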
We're currently building a WebService that will run under IIS.
Our current dilemma is where to store uploaded files?
Up until now we saved the files directly under the physical path of the virtual directory, but then we found out that the application pool restarts when files are deleted under one of its physical paths.
We are thinking of storing the files under the ProgramData folder, but we're afraid that new Windows updates could introduce breaking changes.
Where would be a correct and safe place to store the files?
Is ProgramData good enough?
I really don't understand why you are putting these files directly in the root. Just create a folder called Files instead and put your files there; then your application pool doesn't restart. :)
Or read up on how to create your own mini-CDN: :)
http://www.saotn.org/create-cdn-using-iis-outbound-rules/
And DON'T use ProgramData. It is designed for application data, not for driving your web page.
Really? Deleting a file restarts the application pool? What are you doing with these files? Are they resources of your application? Basically, saving the files on any partition other than the system partition should be fine. Even saving them in a folder inside your application's physical path... deleting the files shouldn't cause the application pool to recycle. A safer solution, though, would be to save them on a separate server if you happen to need several instances of your web service in a load-balanced environment.
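To make the "store them outside the physical path" idea concrete, here is a rough sketch; the UploadRoot appSettings key and the folder layout are assumptions, not an established convention:

```csharp
using System;
using System.Configuration;
using System.IO;

public static class UploadStore
{
    // Base folder configured in web.config, for example:
    // <appSettings><add key="UploadRoot" value="D:\AppUploads" /></appSettings>
    private static readonly string UploadRoot =
        ConfigurationManager.AppSettings["UploadRoot"];

    public static string Save(string fileName, Stream content)
    {
        // Keep only the file name so a client cannot smuggle in path segments.
        string safeName = Path.GetFileName(fileName);
        string target = Path.Combine(UploadRoot, safeName);

        Directory.CreateDirectory(UploadRoot);
        using (var output = File.Create(target))
        {
            content.CopyTo(output);
        }
        return target;
    }

    public static void Delete(string fileName)
    {
        // Deleting here never touches the web application's physical path,
        // so it cannot trigger the restart described in the question.
        File.Delete(Path.Combine(UploadRoot, Path.GetFileName(fileName)));
    }
}
```

Because the folder lives outside the site's content root, deletes there are never seen by ASP.NET's file-change notifications.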
I'm writing an application that runs on a file server and monitors files being dropped in a shared folder.
I'm using a FileSystemWatcher and it's working well with files; however, I would like to process folders as well, so that if someone drops a folder into the share it gets zipped up and treated like the files.
However, I don't know how to check when all the files in the directory (and subdirectories) have finished copying.
My best idea so far has been to start a timer and test every 10 seconds whether the contents are different from the previous 10 seconds and whether any of the files are still locked. Then, when no files are locked and the contents haven't changed for 10 seconds, process the folder.
Is there a better way of doing this?
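For what it's worth, a minimal sketch of that polling idea might look like this; the settle interval, class and method names are illustrative only, and ZipFile requires a reference to System.IO.Compression.FileSystem:

```csharp
using System;
using System.IO;
using System.IO.Compression;   // ZipFile lives in System.IO.Compression.FileSystem.dll
using System.Linq;
using System.Threading;

static class DropFolderProcessor
{
    // Takes two snapshots of the dropped folder a few seconds apart; if nothing changed
    // and every file can be opened exclusively, the copy is treated as finished.
    public static bool TryZipWhenStable(string folder, string zipPath, TimeSpan settleTime)
    {
        string[] Snapshot() =>
            Directory.GetFiles(folder, "*", SearchOption.AllDirectories)
                     .OrderBy(f => f)
                     .Select(f => f + "|" + new FileInfo(f).Length)
                     .ToArray();

        var before = Snapshot();
        Thread.Sleep(settleTime);
        var after = Snapshot();

        if (!before.SequenceEqual(after) || after.Any(IsLocked))
            return false; // still changing or still being written; call again later

        ZipFile.CreateFromDirectory(folder, zipPath);
        return true;
    }

    private static bool IsLocked(string entry)
    {
        string path = entry.Substring(0, entry.LastIndexOf('|'));
        try
        {
            using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                return false;
        }
        catch (IOException)
        {
            return true;
        }
    }
}
```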
I have a project that uses the .net FileSystemWatcher to watch a Samba network share for video files. When it sees a file, it adds it to an encode queue. When files are dequeued, they are moved to a local directory where the process then encodes the file to several different formats and spits them out to an output directory.
The problem arises because the video files are so big that it often takes several minutes for them to finish copying to the network directory, so when a file is dequeued it may or may not have been completely copied to the network share. When the file is being copied from a Windows machine, I can work around this because trying to move a file that is still being copied throws an IOException; I simply catch the exception and retry every few seconds until the copy is done.
When a file is dropped into the Samba share from a computer running OS X, however, that IOException is not thrown. Instead, a partial file is copied to the working directory, which then fails to encode because it is not a valid video file.
So my question is, is there any way to make the FileSystemWatcher wait for files to be completely written before firing its "Created" event (based on this question I think the answer to that question is "no")? Alternatively, is there a way to get files copied from OS X to behave similarly to those in windows? Or do I need to find another solution for watching the Samba share? Thanks for any help.
Option 3. Your best bet is to have a process that watches the incoming share for files. When it sees a file, note its size and/or modification date.
Then, after some amount of time (like 1 or 2 seconds), look again. Note any files that were seen before and compare their new sizes/mod dates to the ones you saw last time.
Any file that has not changed for some "sufficiently long" period of time (1s? 5s?) is considered "done".
Once you have a "done" file, MOVE/rename that file to another directory. It is from THIS directory that your loading process can run. It "knows" that only files that are complete are in this directory.
By having this two-stage process, you can later add other acceptance rules beyond simple file existence (check the format, check the size, etc.), since all of those rules must pass before the file gets moved to its proper staging area.
Your later process can rely on file existence, both as a start mechanism and a restart mechanism. When the process restarts after failure or shutdown, it can assume that any files in the second staging area are either new or incomplete and take appropriate action based on its own internal state. When the processing is done it can choose to either delete the file, or move it to a "finished" area for archiving or what not.
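A rough sketch of that two-stage approach follows; the directory names and the idea of calling Scan from a timer every second or two are assumptions, not part of the answer:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class IncomingShareScanner
{
    private readonly Dictionary<string, (long Size, DateTime LastWrite)> _lastSeen =
        new Dictionary<string, (long Size, DateTime LastWrite)>();

    // Call this every second or two. Any file whose size and modification date match
    // what was recorded on the previous pass is considered "done" and is moved into
    // the staging directory that the encode queue reads from.
    public void Scan(string incomingDir, string stagingDir)
    {
        foreach (var path in Directory.GetFiles(incomingDir))
        {
            var info = new FileInfo(path);
            var current = (Size: info.Length, LastWrite: info.LastWriteTimeUtc);

            if (_lastSeen.TryGetValue(path, out var previous) && previous.Equals(current))
            {
                // Unchanged since the last pass: treat as complete and move it.
                File.Move(path, Path.Combine(stagingDir, info.Name));
                _lastSeen.Remove(path);
            }
            else
            {
                // New file, or still growing: remember what we saw and check again next pass.
                _lastSeen[path] = current;
            }
        }
    }
}
```

Because only complete files ever reach the staging directory, the encoder can treat "file exists there" as its start and restart signal, exactly as described above.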
I've been working on a program to monitor a network folder to find out which spreadsheets our company uses are the most popular. I'm using the FileSystemWatcher class in C# to do the monitoring. I've noticed I'm getting updates to files that are in folders that my user does not have permission to browse. I understand that my software is subscribing to a list of updates done by other system software and not actually browsing those files itself, but is this functionality intentional or is it a bug?
The FileSystemWatcher is intended to monitor for any changes, not just a user opening the file.
EDIT: I'm pretty sure this is done by design. Think of trying to have a program check a network location for updates. You might not want the user to have access to that file location, but you want to be able to check for file changes, and download new files when they are available.
You may also have programs (like BizTalk) generating or editing files that other programs need to access, so these other programs just sit there and watch for file changes.