I'm trying to create a logfile using a MemoryMappedFile. The code looks something like this:
_currentFile = MemoryMappedFile.CreateFromFile(filename, FileMode.Create, filename, MaxSize, MemoryMappedFileAccess.ReadWrite);
_currentFile.GetAccessControl().SetAccessRule(new AccessRule<MemoryMappedFileRights>("everyone", MemoryMappedFileRights.FullControl, AccessControlType.Allow));
_accessor = _currentFile.CreateViewAccessor();
The logging works just fine, but when I try to put a tail on the file at the same time, I get a "Permission denied" error.
I've tried to find an answer as to how you can allow reads on a MemoryMappedFile, but I was unable to find a straight one. So here goes: is it possible to allow readers to access a MemoryMappedFile?
In other words, would it be possible to "tail" a MemoryMappedFile that is being actively written to?
If using a MemoryMappedFile as a log file is a bad idea to begin with, then I'd also like to hear that. And if this is a stupid question to ask, then I apologize.
Hans Passant gave the correct answer to this question so let me quote that here:
"The exact moments in time when the operating system flushes memory updates to the file, and the order in which they occur, are completely unpredictable. This makes MMFs efficient. It therefore puts a lock on the file that prevents any process from reading it. Since such a process could not see anything but stale junk. Also the reason why common logging libraries, like log4net, do not offer MMFs as one of their many possible log targets. So, yeah, bad idea."
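For comparison, the conventional, tail-friendly way to write a log is a plain FileStream opened with read sharing. A minimal sketch (the file name and message are illustrative, not from the question):

```csharp
using System;
using System.IO;
using System.Text;

class SharedLogWriter
{
    static void Main()
    {
        // FileShare.Read lets other processes (e.g. a tail utility)
        // open the file for reading while we hold it open for writing.
        using (var stream = new FileStream("app.log", FileMode.Append,
                   FileAccess.Write, FileShare.Read))
        using (var writer = new StreamWriter(stream, Encoding.UTF8))
        {
            writer.AutoFlush = true; // push each line to the OS immediately
            writer.WriteLine(DateTime.UtcNow.ToString("o") + " sample log entry");
        }
    }
}
```

Unlike an MMF, every write here goes through the file system in a defined order, so a reader always sees a consistent prefix of the log.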
Related
As in the title: I performed some operations on a file using a persisted MemoryMappedFile, and the file became blocked, that is, it became read-only. What should I do to make the file read-write again?
I rebooted the machine, that is, restarted the OS, but the file is still blocked.
Thanks for any help.
As in Panagiotis Kanavos's comment, I didn't find anything that makes the file read-write again. It behaves like this because only the program that mapped it can write to it.
By the way, the expected answer is: "I don't know".
Then:
I don't know.
I've been thinking about writing a small specialized backup app, similar to newly introduced file history in Windows 8. The basic idea is to scan some directories every N hours for changed files and copy them to another volume. The problem is, some other apps may request access to these files while they are being backed up and get an access denial, potentially causing all kinds of nasty problems.
As far as I can tell, there are several approaches to this problem:
1) Using Volume Shadow Copy service
From my point of view, the future of this service is uncertain, and its overhead during heavy IO loads may cripple the system.
2) Using Sharing Mode when opening files
Something like this mostly works...
using (var stream = new FileStream("test.txt", FileMode.Open, FileAccess.Read,
    FileShare.Delete | FileShare.ReadWrite | FileShare.Read | FileShare.Write))
{
    // [Copy data]
}
... until some other process requests access to the same file without FileShare.Read, at which point an IOException will be thrown.
3) Using an Opportunistic Lock that may be "broken" by other (write?) requests.
This behaviour of FileIO.ReadTextAsync looks exactly like what I want, but it also looks very implementation-specific and may change in the future. Does anyone know how to explicitly oplock a file locally via C# or C++?
Maybe there is some simple C# method like File.TryReadBytes that provides such "polite" reading? I'm interested in solutions that will work on Windows 7 and above.
My vote's on VSS. The main reason is that it doesn't interfere with other processes modifying your files, thus it provides consistency. A possible inconsistency pretty much defeats the purpose of a backup. The API is stable and I wouldn't worry about its future.
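That said, if a lighter-weight, best-effort read is acceptable, the File.TryReadBytes-style helper the question imagines is easy to approximate with a wrapper that swallows sharing violations. A sketch (the class and method names are made up; no such method exists in the BCL):

```csharp
using System;
using System.IO;

static class PoliteFile
{
    // Hypothetical helper: returns false instead of throwing when the
    // file is locked, missing, or otherwise unreadable.
    public static bool TryReadBytes(string path, out byte[] data)
    {
        try
        {
            using (var stream = new FileStream(path, FileMode.Open,
                       FileAccess.Read,
                       FileShare.ReadWrite | FileShare.Delete))
            using (var memory = new MemoryStream())
            {
                stream.CopyTo(memory);
                data = memory.ToArray();
                return true;
            }
        }
        catch (IOException) { data = null; return false; }           // sharing violation, not found, ...
        catch (UnauthorizedAccessException) { data = null; return false; }
    }

    static void Main()
    {
        byte[] data;
        Console.WriteLine(TryReadBytes("test.txt", out data)
            ? "read " + data.Length + " bytes"
            : "file busy or missing");
    }
}
```

Note that even when this succeeds, a concurrent writer can leave you with a torn snapshot; that lack of consistency is exactly why the answer above favors VSS for backups.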
I thought this could've been a common question, but it has been very difficult to find an answer. I've tried searching here and other forums with no luck.
I'm writing a C# (.NET 4) program to monitor a process. It already raises an event when the process starts and when it stops, but I also need to check where this process is reading from and writing to; especially writing to, since I know this process writes a large amount of data every time it runs. We process batches of data, and the path the process writes to contains the Batch ID, which is an important piece of information for logging the results of the process.
I've looked into the System.Diagnostics.Process.BeginOutputReadLine method, but since the documentation says that StandardOutput must be redirected, I'm not sure if this can be done on a process that is currently running, or if it affects the write operation originally intended by the process.
It is a console application in C#. If anyone has any idea on how to do this, it would be much appreciated.
Thanks in advance!
Output redirection would only help you solve the problem of intercepting the process' standard output stream. This would have no effect on read/write operations to other files or streams that the program would use.
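For reference, redirection also has to be configured before the process is started, so it cannot be attached to an already-running process. A minimal sketch (the command line is purely illustrative, and Windows-specific):

```csharp
using System;
using System.Diagnostics;

class Redirect
{
    static void Main()
    {
        var psi = new ProcessStartInfo("cmd.exe", "/c echo hello")
        {
            RedirectStandardOutput = true, // must be set before Start()
            UseShellExecute = false        // required for redirection
        };
        using (var process = Process.Start(psi))
        {
            // Attach the handler before beginning the asynchronous read.
            process.OutputDataReceived += (s, e) =>
            {
                if (e.Data != null)
                    Console.WriteLine("child wrote: " + e.Data);
            };
            process.BeginOutputReadLine();
            process.WaitForExit();
        }
    }
}
```

This captures only the console stream; as noted above, it says nothing about which files the child opens.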
The easiest way to do this would be to avoid reverse engineering this information and exert some control over where the process writes its data (e.g. pass a command line parameter to it to specify the output path and you can monitor that output path yourself).
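If you do control the output path, watching it directly is straightforward with FileSystemWatcher. A sketch (the directory and file name are assumptions for illustration; in practice the directory would be whatever path you passed to the monitored process):

```csharp
using System;
using System.IO;

class OutputWatcher
{
    static void Main()
    {
        // Illustrative output directory, created so the sample can run anywhere.
        string outputDir = Path.Combine(Path.GetTempPath(), "batch-output");
        Directory.CreateDirectory(outputDir);

        using (var watcher = new FileSystemWatcher(outputDir))
        {
            watcher.IncludeSubdirectories = true;
            watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite;
            // The batch ID could be parsed out of e.FullPath here.
            watcher.Created += (s, e) => Console.WriteLine("created: " + e.FullPath);
            watcher.Changed += (s, e) => Console.WriteLine("changed: " + e.FullPath);
            watcher.EnableRaisingEvents = true;

            // Trigger one event so the sketch shows something when run.
            File.WriteAllText(Path.Combine(outputDir, "batch-42.dat"), "data");
            System.Threading.Thread.Sleep(500); // give the watcher a moment to fire
        }
    }
}
```

Events are delivered asynchronously and can be coalesced under heavy load, so treat this as a notification mechanism, not a complete audit trail.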
If that is impossible for some reason, you can look into these approaches, all of which are quite advanced and have various drawbacks:
Use Detours to launch the process and redirect calls to CreateFile to a function that you define (e.g. you could call into some other function to track the file name that it used and then call the real CreateFile). Note that a license to use Detours costs money, and it requires you to build an unmanaged DLL to define your replacement function.
Read the data from the Microsoft-Windows-Kernel-File event tracing provider. This provider tracks all file operations for everything on the system. Using this data requires advanced knowledge of ETW and a lot of P/Invoke calls if you are trying to consume it from C#.
Enumerate the open handles of the process once it is started. A previous stackoverflow.com question has several possible solutions. Note that this is not foolproof as it only gives you a snapshot of the activity at a point in time (e.g. the process may open and close handles too quickly for you to observe it between calls to enumerate them) and most of those answers require calling into undocumented functions.
I came across this implementation recently: DetectOpenFiles, but I have not used or tested it. Feel free to try it. It seems to deliver open-file-handle information for a given process ID. Looking forward to reading about your experience with it! ;-)
In my code, I have these lines:
XmlWriterSettings writerSettings = new XmlWriterSettings();
writerSettings.Indent = true;
XmlWriter writer = XmlWriter.Create(filename, writerSettings);
document.Save(writer);
This works fine when filename does not exist. But when it does, I get this error (on the 3rd line, not the 4th):
System.IO.IOException: Sharing violation on path [the file path]
I want to overwrite the file if it already exists. How do I do this?
If you look carefully at the IOException, it says that it's a "sharing violation". This means that while you are trying to access this file, another program is using it. Usually, it's not much of a problem with reading, but with writing to files this can happen quite a lot. You should:
Try to find out whether some other program is using this file, what that program is, and why it's doing so. It's possible that some program (especially one written in a language without good resource-cleanup facilities) accessed the file and then never closed the IO stream, thus locking the file. There are also some utilities (if my memory serves me correctly) that let you see which processes are using a certain file; just google it.
There's a possibility that when you were debugging your program, you may have killed the process or something (I do that sometimes), and the IO stream may have not been closed. For this, the easiest fix (as far as I know) is just a reboot.
Alternatively, the issue may be coming from your own code. Even though you're writing in C#, where garbage collection and the IO library usually prevent such problems, you might have forgotten to close a file stream somewhere. I do this sometimes, and it takes quite a while to find the location of the bug, even though the fix is nearly instant. If you step through your program and use watches to keep track of your IO operations, it should be relatively simple to find such a bug.
Good luck!
The problem isn't that the file exists, but that it is in use by a different program (or your own program). If it was simply that the file existed it would be overwritten and cause no exception.
If it's your program that created the file that already exists, it's likely that you haven't properly disposed the object that created it, so the file is still open.
Try using the overload of XmlWriter.Create that accepts a Stream, and pass in a FileStream from File.Create(filename)...
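Putting that suggestion together, with FileMode.Create (which File.Create uses) truncating and overwriting any existing file. A self-contained sketch (the file name and document contents are illustrative):

```csharp
using System.IO;
using System.Xml;

class OverwriteXml
{
    static void Main()
    {
        string filename = "output.xml";      // illustrative path
        var document = new XmlDocument();
        document.LoadXml("<root><item>1</item></root>");

        var writerSettings = new XmlWriterSettings { Indent = true };

        // File.Create opens with FileMode.Create, which truncates and
        // overwrites an existing file instead of failing.
        using (var stream = File.Create(filename))
        using (var writer = XmlWriter.Create(stream, writerSettings))
        {
            document.Save(writer);
        } // disposing the writer and stream releases the file handle,
          // so a later run will not trip over our own open handle
    }
}
```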
-What is the most foolproof way of ensuring the folder or file I want to manipulate is accessible (not read-only)?
-I know I can use ACL to add/set entries (make the file/folder non-readonly), but how would I know if I need to use security permissions to ensure file access? Or can I just add this in as an extra measure and handle the exception/negative scenario?
-How do I know when to close or just flush a stream? For example, should I use the streams once in a method and then flush/close/dispose at the end? If I use Dispose(), do I still need to call Flush() and Close() explicitly?
I ask this question because constantly ensuring a file is available is a core requirement but it is difficult to guarantee this, so some tips in the design of my code would be good.
Thanks
There is no way to guarantee access to a file. I know this isn't a popular response but it's 100% true. You can never guarantee access to a file even if you have an exclusive non-sharing open on a Win32 machine.
There are too many ways this can fail that you simply cannot control. The classic example is a file opened over the network. Open it any way you'd like with any account, I'll simply walk over and yank the network cable. This will kill your access to the file.
I'm not saying this to be mean or arrogant. I'm saying this to make sure that people understand that operating on the file system is a very dangerous operation. You must accept that the operation can and will fail. It's imperative that you have a fallback scenario for any operation that touches disk.
-What is the most foolproof way of ensuring the folder or file I want to manipulate is accessible (not read-only)?
Opening them in write-mode?
Try to write a new file into the folder and catch any exceptions. Along with that, do the normal sanity checks, like whether the folder/file exists, etc.
You should never change the folder security in code, as the environment could drastically change and cause major headaches. Rather, ensure that the security is well documented and configured beforehand. Alternatively, use impersonation in your own code to ensure you are always running the relevant code as a user with full permissions to the folder/file.
Never call Dispose() unless you have no other choice. You always flush before closing the file or when you want to commit the content of the stream to the file/disk. The choice of when to do it depends on the amount of data that needs to be written and the time involved in writing the data.
100% foolproof way to ensure a folder is writable - create a file, close it, verify it is there, then delete it. A little tedious, but you asked for foolproof =)
Your better bet, which covers your question about ACL, is to handle the various exceptions if you cannot write to a file.
Also, I always call Close explicitly unless I need to read from a file before I'm done writing it (in which case I call flush then close).
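The create/verify/delete probe described above can be sketched as follows (the helper name is made up):

```csharp
using System;
using System.IO;

static class FolderCheck
{
    // Probe: create a file, verify it exists, then delete it again.
    public static bool IsWritable(string folder)
    {
        // Random name so we never clobber a real file.
        string probe = Path.Combine(folder, Path.GetRandomFileName());
        try
        {
            using (File.Create(probe)) { }    // create and immediately close
            bool exists = File.Exists(probe); // verify it really landed
            File.Delete(probe);               // clean up
            return exists;
        }
        catch (IOException) { return false; }
        catch (UnauthorizedAccessException) { return false; }
    }

    static void Main()
    {
        Console.WriteLine("temp writable: " + IsWritable(Path.GetTempPath()));
    }
}
```

Even a successful probe is only a snapshot: permissions, disk space, or network state can change an instant later, which is why the first answer above insists on a fallback for every disk operation.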
Flush() - Synchronizes the in-memory buffer with the disk. Call when you want to write the buffer to the disk but keep the file open for further use.
Dispose(bool) - Releases the unmanaged resource (i.e. the OS file handle) and, if passed true, also releases the managed resources.
Close() - Calls Dispose(true) on the object.
Also, Dispose flushes the data before closing the handle so there is no need to call flush explicitly (although it might be a good idea to be flushing frequently anyway, depending on the amount and type of data you're handling).
If you're doing relatively atomic operations to files and don't need a long-running handle, the "using" paradigm is useful to ensure you are handling files properly, e.g.:
using (StreamReader reader = new StreamReader("filepath"))
{
    // Do some stuff
} // CLR automagically handles flushing and releasing resources