C# overwrite file, error: file in use by IIS 7.5

I have a program that overwrites a certain set of files required for my website. However, the traffic on my website has increased so much that I now get a "file in use" error, which leaves the program unable to update the file.
This program runs every 5 minutes to update the specified files.
The reason I let this program handle writing the file, rather than the website itself, is that I also need to upload the file to a different web server (through FTP). This way I also ensure the file gets updated every 5 minutes, instead of only when a user views the page.
My question therefore is: can I tell IIS 7.5 to cache the file for a short period (say 5 seconds to 1 minute) after it has been updated? This should ensure that the next time the program runs to update the file, it won't encounter any problems.

The simplest solution would be to change the program that refreshes the file so that it stores the new information in a database, not in the filesystem.
But if you can't use a database, I would take a different approach: store the file contents in System.Web.Caching.Cache together with the time the file was last modified, then on each request check whether the file has changed. If it hasn't, use the cached version; if it has, store the new contents and timestamp in the same cache entry.
Of course, you will have to check that you can actually read the file, and only then refresh the cache contents; if you cannot read the file, simply serve the last version from the cache.
The initial read of the file should happen in Application_Start, to ensure the cache is initialized; there you will have to wait until the file becomes readable so you can store it in the cache for the first time.
The best way to check that you can read from the file is to catch the exception, because a lock can appear after your check. See this post: How to check for file lock?
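A minimal sketch of that approach, assuming the file is plain text; the cache key, file path, and helper name below are made up for illustration:

    using System;
    using System.IO;
    using System.Web;

    public static class CachedFile
    {
        private const string Key = "MyDataFile";              // hypothetical cache key
        private const string FilePath = @"C:\site\data.txt";  // hypothetical file path

        public static string GetContents()
        {
            var cached = HttpRuntime.Cache[Key] as Tuple<DateTime, string>;
            DateTime lastWrite = File.GetLastWriteTimeUtc(FilePath);

            // Serve the cached copy if the file has not changed since we stored it.
            if (cached != null && cached.Item1 == lastWrite)
                return cached.Item2;

            try
            {
                string contents = File.ReadAllText(FilePath);
                HttpRuntime.Cache.Insert(Key, Tuple.Create(lastWrite, contents));
                return contents;
            }
            catch (IOException)
            {
                // The updater holds the file right now: fall back to the last good version.
                if (cached != null)
                    return cached.Item2;
                throw; // nothing cached yet (e.g. before Application_Start finished)
            }
        }
    }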

Related

How to ensure that a file is not locked before performing any modification on it?

Step 1: I am copying a file manually by reading from the source and writing to the target file in chunks. I keep the file handle open until the whole copy is finished, and close it safely once the copy is over.
Step 2: After the copy is over, I set the timestamp, attributes, ACL, and many more things.
Sometimes in step 2 I get an error that the file is being used by another process. This happens mostly for .exe files. I found out which process was using the file from File locked by other process. As per that answer, the OS locks the file for a very short time to set the icon or other information on it.
But if I perform step 2 immediately after finishing step 1, I get an access error. How can I ensure that the OS will not lock the file?
Looping to check for file access beforehand is not a solution, because the lock may appear at any point during step 2. Step 2 is not atomic; I need to open and close the same file multiple times.
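Since any up-front check races with the lock, one pattern is to make each open in step 2 its own small retry loop and treat the exception itself as the check. A sketch (the retry count and delay are arbitrary):

    using System;
    using System.IO;
    using System.Threading;

    public static class FileRetry
    {
        // Tries to open the file exclusively, retrying briefly when another
        // process (e.g. the shell extracting an icon) holds a transient lock.
        public static FileStream OpenWithRetry(string path, FileAccess access,
                                               int attempts = 10, int delayMs = 200)
        {
            for (int i = 0; ; i++)
            {
                try
                {
                    return new FileStream(path, FileMode.Open, access, FileShare.None);
                }
                catch (IOException)
                {
                    if (i >= attempts - 1)
                        throw;            // still locked after all attempts: give up
                    Thread.Sleep(delayMs);
                }
            }
        }
    }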

XML file data is lost when a sudden shutdown occurs

I have an application that stores data in an XML file every 500 ms using the XElement object's .Save("path") method.
The problem is: when a sudden shutdown occurs, the content of the file is wiped, so on the next run of the application the file cannot be used.
How to prevent that / make sure the data will not be lost?
P.S.: I'm using C# (Visual Studio 2010) under Windows 7.
I've run an experiment: instead of writing to the same data.xml file, I created a new file each time (by copying from the original file), and when the power went off while copying from data.xml, it corrupted all the previously created files?!
Let's assume your file is data.xml. Instead of writing to data.xml all the time, write to a temporary file data.xml.tmp, and when finished, rename it to data.xml. But renaming will not work if you already have a data.xml file, so you will need to delete it first and then rename the temporary file.
That way, data.xml will contain the last safe data. If you have a sudden shutdown, the incomplete file will be the temporary data.xml.tmp. If your program tries to read the file later on and there is no data.xml file, that means the shutdown happened between the delete and rename operations, so you will have to read the temporary file instead. We know it is safe because otherwise there would be a data.xml file.
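A sketch of that save sequence, using the XElement.Save call from the question (the class and method names are made up):

    using System.IO;
    using System.Xml.Linq;

    static class SafeXml
    {
        // Write the full document to a temp file first, so data.xml is never
        // left half-written if the power dies mid-save.
        public static void SafeSave(XElement root, string path)
        {
            string tmp = path + ".tmp";
            root.Save(tmp);           // 1. write everything to data.xml.tmp

            if (File.Exists(path))
                File.Delete(path);    // 2. delete the old data.xml
            File.Move(tmp, path);     // 3. rename the temp file into place
        }
    }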
You can use a 2-phase commit:
Write the new XML to a file with a different name
Delete the old file
Rename the new file to the old name
This way, there will always be at least one complete file.
If you restart, and the standard name doesn't exist, check for the different name.
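The matching check at startup might look like this (same illustrative naming as the save sketch above):

    using System.IO;
    using System.Xml.Linq;

    static class SafeXmlLoader
    {
        // If data.xml is missing, the shutdown happened between the delete and
        // the rename, so the temp file holds the complete last save.
        public static XElement SafeLoad(string path)
        {
            if (File.Exists(path))
                return XElement.Load(path);

            string tmp = path + ".tmp";
            if (File.Exists(tmp))
                return XElement.Load(tmp);

            return null; // first run: nothing has been saved yet
        }
    }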
This one could be a lifesaver, but it takes a little more effort. There should be a separate process which does the following:
Takes a backup into its stash automatically whenever the file gets updated.
Internally maintains two versions in a linked list.
When the file gets updated, the latest version becomes HEAD via linkedList.AddFirst(), and the oldest version pointed to by TAIL is removed via linkedList.RemoveLast().
And of course, during startup it should scan the stash and load the latest version available there.
In the hard-shutdown scenario, when the system starts up the next time, this process should check whether the file is valid or corrupted. If corrupted, it should restore the latest version from HEAD and subscribe to change notifications using a simple FileSystemWatcher.
This approach is well tested.
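A rough sketch of the stash idea (validation, restore, and duplicate-event handling are omitted; the class and directory names are made up, and note that Changed can fire while the writer still holds the file, so a real version needs the read-retry logic discussed above):

    using System;
    using System.Collections.Generic;
    using System.IO;

    public class BackupStash
    {
        private readonly LinkedList<string> _versions = new LinkedList<string>();
        private readonly string _stashDir;

        public BackupStash(string watchDir, string fileName, string stashDir)
        {
            _stashDir = stashDir;
            Directory.CreateDirectory(stashDir);

            var watcher = new FileSystemWatcher(watchDir, fileName);
            watcher.Changed += (s, e) => TakeBackup(e.FullPath);
            watcher.EnableRaisingEvents = true;
        }

        private void TakeBackup(string sourcePath)
        {
            string copy = Path.Combine(_stashDir,
                DateTime.UtcNow.Ticks + "_" + Path.GetFileName(sourcePath));
            File.Copy(sourcePath, copy);

            _versions.AddFirst(copy);              // the new version becomes HEAD
            if (_versions.Count > 2)
            {
                File.Delete(_versions.Last.Value); // drop the version at TAIL
                _versions.RemoveLast();
            }
        }
    }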
Problems seen
What if the hard shutdown happens while updating HEAD?
-- Well, there is another version in the stash, right next to HEAD.
What if the hard shutdown happens while updating HEAD when the stash is empty? -- We know the file was valid while HEAD was being updated, so the process can simply try the copy again at the next startup, since the file itself is not corrupted.
What if the stash is empty and the file has been corrupted? -- This is the death pit, and no solution is available for it. But this scenario occurs only if you deploy this recovery process after the file corruption has already happened.

Changing web/app.config at runtime

When I change a field in the web.config or app.config of a C# project, will that value automatically feed into the program without any restarts or interruptions? Does the program fetch from the config files every time the field is requested, or is it cached somewhere? How does this work?
I want a situation where I change a value in the config and the application picks it up immediately: I make the change, and the program pulls the new value instantly.
ASP.NET monitors the web.config file and will recycle the AppDomain when it notices changes. It will wait for the current requests to be processed and will queue any new incoming requests.
So yes, the changes will be pulled in by the application, but not instantly and not without interruption (although that depends on your definition of 'instantly').
You are not supposed to change web.config through code, as doing so results in a restart of the AppDomain. You should instead make a separate XML file for such settings and change it through code.
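For example, a hypothetical settings.xml read through code on each access; nothing here touches web.config, so the AppDomain is never recycled:

    using System.Xml.Linq;

    // Assumed file: <settings><refreshSeconds>30</refreshSeconds></settings>
    public static class LiveSettings
    {
        private const string FilePath = "settings.xml"; // assumed location

        // Re-reads the file on every call, so the newest value is always returned.
        public static string Get(string name)
        {
            XElement setting = XElement.Load(FilePath).Element(name);
            return setting == null ? null : setting.Value;
        }
    }

In practice you would cache the parsed document and invalidate it with a FileSystemWatcher rather than hitting the disk on every call, but the point stands: settings kept outside web.config can change without interrupting the application.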

Determining copy/write progress with FileSystemWatcher in C#

Context: A team of operators work with large batch files up to 10GB in size in a third-party application. Each file contains thousands of images, and after processing every 50 images, they hit the save button. The workplace has unreliable power, and if the power goes out during a save, the entire file becomes corrupt. To overcome this, I am writing a small utility that uses the FileSystemWatcher to detect saves and create a backup, so the file can be restored without reprocessing the entire batch.
Problem: The FileSystemWatcher does a very good job of reporting events, but there is a problem I can't pinpoint. Since the monitored files are large, the save process takes a few seconds, and I want to be notified once the save operation is complete. I suspect that every time the file buffer is flushed to disk, it triggers an unwanted event. The file remains locked for writing whether or not a save is in progress, so I cannot tell that way.
Creating a backup of the file DURING a save operation defeats the purpose, since it corrupts the backed-up file.
Question:
Is there a way to use the FileSystemWatcher to be notified after the save operation is complete?
If not, how else could I reliably check to see if the file is still being written to?
Alternatives: Any alternative suggestions would be welcome as well.
There's really no direct way to do exactly what you're trying to do. The file system itself doesn't know when a save operation is completed. In logical terms, you may think of it as a series of saves simply because the user clicks the Save button multiple times, but that isn't how the file system sees it. As long as the application has the file locked for writing, as far as the file system is concerned it is still in the process of being saved.
If you think about it, it makes sense. If the application holds onto write access to the file, how would the file system know when the file is in a "corrupt" state and when it's not? Only the application writing the file knows that.
If you have access to the application writing the file, you might be able to solve this problem there. Failing that, you might get somewhere with the last-modified date, creating a backup only if the file hasn't been modified for a certain period of time, but that is bound to be buggy and unreliable.
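A sketch of that heuristic; the quiet period is a guess you would have to tune, and a writer that merely pauses longer than the window will still fool it:

    using System;
    using System.IO;
    using System.Threading;

    public static class FileQuiesce
    {
        // Blocks until the file's size and timestamp survive one full quiet
        // period unchanged. Heuristic only: not a guarantee the save is done.
        public static void WaitUntilQuiet(string path, TimeSpan quietPeriod)
        {
            long size;
            DateTime stamp;
            do
            {
                size = new FileInfo(path).Length;
                stamp = File.GetLastWriteTimeUtc(path);
                Thread.Sleep(quietPeriod);
            }
            while (new FileInfo(path).Length != size
                   || File.GetLastWriteTimeUtc(path) != stamp);
        }
    }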

Why doesn't OS X lock files like windows does when copying to a Samba share?

I have a project that uses the .net FileSystemWatcher to watch a Samba network share for video files. When it sees a file, it adds it to an encode queue. When files are dequeued, they are moved to a local directory where the process then encodes the file to several different formats and spits them out to an output directory.
The problem arises because the video files are so big, that it often takes several minutes for them to copy completely into the network directory, so when a file is dequeued, it may or may not have completely finished being copied to the network share. When the file is being copied from a windows machine, I am able to work around it because trying to move a file that is still being copied throws an IOException. I simply catch the exception and retry every few seconds until it is done copying.
When a file is dropped into the Samba share from a computer running OS X however, that IOException is not thrown. Instead, a partial file is copied to the working directory which then fails to encode because it is not a valid video file.
So my question is: is there any way to make the FileSystemWatcher wait for files to be completely written before firing its "Created" event (based on this question, I think the answer is "no")? Alternatively, is there a way to get files copied from OS X to behave like those copied from Windows? Or do I need to find another solution for watching the Samba share? Thanks for any help.
Option 3. Your best bet is to have a process that watches the incoming share for files. When it sees a file, note its size and/or modification date.
Then, after some amount of time (like, 1 or 2 seconds), look again. Note any files that were seen before and compare their new sizes/mod dates to the one you saw last time.
Any file that has not changed for some "sufficiently long" period of time (1s? 5s?) is considered "done".
Once you have a "done" file, MOVE/rename that file to another directory. It is from THIS directory that your loading process can run. It "knows" that only files that are complete are in this directory.
By having this two-stage process, you can later add other acceptance rules for a file beyond its mere existence, since all of those rules must pass before the file gets moved to its proper staging area (you can check format, check size, etc.).
Your later process can rely on file existence, both as a start mechanism and a restart mechanism. When the process restarts after a failure or shutdown, it can assume that any files in the second staging area are either new or incomplete, and take appropriate action based on its own internal state. When processing is done, it can choose to either delete the file or move it to a "finished" area for archiving.
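A minimal sketch of that polling loop, assuming size plus modification date is a good enough "unchanged" signal; the directory names and the five-second window are placeholders, and error handling is omitted:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading;

    class IncomingScanner
    {
        static readonly Dictionary<string, Tuple<long, DateTime>> Seen =
            new Dictionary<string, Tuple<long, DateTime>>();

        static void Main()
        {
            const string incoming = @"\\server\share\incoming"; // placeholder paths
            const string staging  = @"\\server\share\staging";

            while (true)
            {
                foreach (var file in new DirectoryInfo(incoming).GetFiles())
                {
                    var snapshot = Tuple.Create(file.Length, file.LastWriteTimeUtc);
                    Tuple<long, DateTime> previous;

                    // Unchanged since the last scan: consider it done and move it.
                    if (Seen.TryGetValue(file.Name, out previous) && snapshot.Equals(previous))
                    {
                        File.Move(file.FullName, Path.Combine(staging, file.Name));
                        Seen.Remove(file.Name);
                    }
                    else
                    {
                        Seen[file.Name] = snapshot;
                    }
                }
                Thread.Sleep(TimeSpan.FromSeconds(5)); // the "sufficiently long" window
            }
        }
    }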
