Faulty "file in use" error - C#

This code is the first code in my Form_Load method:
DirectoryInfo dir = new DirectoryInfo("d:\\themes.thumb");
string[] animals = new string[]
{
    "Snakes",
    "SnowyOwls",
    "Tigers",
    "TropicalFish",
    "WildBeauty",
    "Wolves"
};
foreach (FileInfo fil in dir.GetFiles())
{
    for (int ii = 0; ii < animals.Length; ii++)
    {
        if (fil.Name.StartsWith(animals[ii]))
        {
            try
            {
                fil.Replace(fil.FullName, fil.FullName.Replace(fil.Name, "Animals-" + fil.Name));
            }
            catch
            {
            }
        }
    }
}
and I'm getting the following error whenever if (fil.Name.StartsWith(animals[ii])) is true:
The process cannot access the file because it is being used by another process.
What is wrong, given that I have not opened any files before this code runs?

You should separate your reading logic from your update logic.
For example:
var replacements = dir.GetFiles()
    .Where(file => animals.Any(animal => file.Name.StartsWith(animal)))
    .Select(file => new
    {
        OldFullName = file.FullName,
        NewFullName = file.FullName.Replace(file.Name, "Animals-" + file.Name)
    })
    .ToList();

foreach (var replacement in replacements)
{
    File.Move(replacement.OldFullName, replacement.NewFullName);
}
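Materializing the query with ToList() before the loop is the key point: the directory is fully read before any rename happens, so the file system is never being modified while it is still being enumerated.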
Your replace logic also has some subtle bugs (for example, what happens with files that live in a folder called "Wolves"?); you may want to work those out.

It looks like you are misunderstanding how to use the FileInfo.Replace method.
fil.Replace(fil.FullName,fil.FullName.Replace(fil.Name,"Animals-" + fil.Name));
Here you are actually trying to overwrite fil's contents with itself. That explains the error message.
You might want to read the documentation a bit more closely.
EDIT:
To be absolutely clear: FileInfo.Replace is not meant to be used to perform file renames. It's meant to replace file contents. To perform a rename, you use FileInfo.MoveTo.
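For instance, a rename along the lines of the original loop might look like this (a minimal sketch reusing fil from the question; the existence check is just a guard, since MoveTo throws an IOException if the target already exists):
// Build the new path in the same directory and rename via MoveTo.
string newFullName = Path.Combine(fil.DirectoryName, "Animals-" + fil.Name);
if (!File.Exists(newFullName)) // target must not already exist
    fil.MoveTo(newFullName);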

Get LockHunter. It's a free tool which shows you which process is holding onto a particular file or folder. I found it really useful.
Microsoft Process Explorer is also free and can also find open handles (Ctrl+F) by name.

Related

Continue Foreach Loop On Error

I want my code to continue when an error comes up, but I don't know how.
Here's my code:
foreach (string path in Directory.GetDirectories(@"C:\", "*.*", SearchOption.AllDirectories))
{
    Console.WriteLine(path);
}
The error is raised on the foreach (string path in Directory.GetDirectories(@"C:\", "*.*", SearchOption.AllDirectories)) line, and I don't know how to continue the loop.
The error:
Unauthorized access
And even when I run my code as Administrator, the error comes up again.
Thanks,
The best approach is to use a recursive search with SearchOption.TopDirectoryOnly rather than SearchOption.AllDirectories.
If you use SearchOption.AllDirectories, one access violation breaks your entire loop before any file or directory is processed. If you use SearchOption.TopDirectoryOnly, you only skip what is inaccessible.
To do this, create a method that receives a directory path as input. In that method, if the input directory has child directories (see the Directory.GetDirectories(string path) method), call the method again for each child directory (a recursive call) before processing the files in the directory. Otherwise, get the files (see Directory.GetFiles) in the directory and process them immediately.
To keep the code from crashing when you cannot access a certain file or directory, wrap each child-directory read and file read in a try-catch block. That way, if one file or folder cannot be accessed, your code keeps running and moves on to the next file or directory.
Alternatively, you can call Directory.GetAccessControl() per child directory to check whether you have access beforehand (this option is rather hard, though).
Edit (code added):
Something like this will do:
public static List<string> GetAllAccessibleDirectories(string path, string searchPattern) {
List<string> dirPathList = new List<string>();
try {
List<string> childDirPathList = Directory.GetDirectories(path, searchPattern, SearchOption.TopDirectoryOnly).ToList(); //use TopDirectoryOnly
if (childDirPathList == null || childDirPathList.Count <= 0) //this directory has no child
return null;
foreach (string childDirPath in childDirPathList) { //foreach child directory, do recursive search
dirPathList.Add(childDirPath); //add the path
List<string> grandChildDirPath = GetAllAccessibleDirectories(childDirPath, searchPattern);
if (grandChildDirPath != null && grandChildDirPath.Count > 0) //this child directory has children and nothing has gone wrong
dirPathList.AddRange(grandChildDirPath.ToArray()); //add the grandchildren to the list
}
return dirPathList; //return the whole list found at this level
} catch {
return null; //something has gone wrong, return null
}
}
And to call it, you can do something like this
string rootpath = @"C:\DummyRootFolder";
List<string> dirList = GetAllAccessibleDirectories(rootpath, "*.*"); //you get all accessible directories here
In dirList you get all the directories you searched for, and if there is an access violation along the way, it only affects the sub-directory search below that point, thanks to the try-catch block.
Note that the rootpath itself is excluded by the method. If you want to add it to the list too, you can simply do
dirList.Insert(0, rootpath); //do this after you get dirList
There are also more complicated ways of doing this using Directory.GetAccessControl and PermissionSet.
Hope this clarifies things.
According to the documentation, you should look at EnumerateDirectories for performance reasons:
https://msdn.microsoft.com/en-us/library/c1sez4sc(v=vs.110).aspx
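A rough sketch of combining Directory.EnumerateDirectories with the per-level try-catch idea from the answer above (a sketch, not the only way to do it):
var pending = new Stack<string>();
pending.Push(@"C:\");
while (pending.Count > 0)
{
    string current = pending.Pop();
    Console.WriteLine(current); // process the directory itself
    try
    {
        // Top-level enumeration only, so one denied folder skips just its children.
        foreach (string dir in Directory.EnumerateDirectories(current))
            pending.Push(dir);
    }
    catch (UnauthorizedAccessException)
    {
        // no access to this folder's children: skip them and carry on
    }
}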
Also, it appears that this question has already been answered before:
Directory.EnumerateFiles => UnauthorizedAccessException
Hope this helps!
How about this:
foreach (string path in Directory.GetDirectories(@"C:\", "*.*", SearchOption.AllDirectories))
{
    try
    {
        Console.WriteLine(path);
    }
    catch (Exception)
    {
        Console.WriteLine("Unable to access directories in path: " + path);
    }
}

How do I extract a SubDirectory using C# DotNetZip?

I have MyFile.zip that has a main directory "MyMainFolder", and several SubDirectories inside of that, one of which I want to extract (MySubFolder)...with all of its subdirs and contents.
I am trying to figure out how to 'step into' MyMainFolder so that I can extract 'MySubFolder'.
I have some code that will extract a folder as long as the folder I am looking for is the main folder in the zip. It can also detect whether the main folder is called "MyMainFolder", so it knows to look inside that and extract from there rather than looking in the zip root for MySubFolder.
using (ZipFile zip1 = ZipFile.Read(fileName))
{
    var result = zip1.Any(entry => entry.FileName.Contains("MySubFolder"));
    if (result == false)
    {
        MessageBox.Show("MyMainFolder detected....Extracting from MyMainFolder...");
        // something here that will extract JUST MySubFolder and contents
    }
    else
    {
        var selection = from e in zip1.Entries
                        where e.FileName.Contains("MySubFolder")
                        select e;
        foreach (var e in selection)
            e.Extract(outputDirectory);
    }
}
So far, I have tried putting a separate using inside each branch of the if-else, and I tried creating a separate selectionX in which I forced the root folder name (which will always be 'MyMainFolder' for this experiment) to be part of what it looked through, thinking I could then extract MySubFolder, but I couldn't get that to work either. I also tried to adapt several other methods I found on Stack Overflow and elsewhere, such as 'how to extract files, but ignoring the path in the zipfile', to find a way to 'skip' over the root folder when extracting, so that it gets ONLY MySubFolder (and contents) and extracts it to outputDirectory (not MyMainFolder\MySubFolder...).
Any help is appreciated.
Thanks!!
Enumerating through the entire contents until I came across what I was looking for worked, but just as an experiment, I wanted to see if it could be done another way.
Since I was unable to check the names of the subfolders inside a root folder, I figured I could just match what I was looking for as I parsed through it, extracting only what I wanted and then changing the output path.
using (ZipFile zip1 = ZipFile.Read(fileName))
{
    var result = zip1.Any(entry => entry.FileName.Contains("MySubFolder"));
    if (result == false)
    {
        // something here that will extract JUST MySubFolder and contents
        foreach (var e in zip1.Entries)
        {
            string TestX = Path.GetDirectoryName(e.FileName);
            string MyNewPath = outputDirectory + @"\" + TestX;
            e.Extract(MyNewPath);
        }
    }
    else
    {
        var selection = from e in zip1.Entries
                        where e.FileName.Contains("MySubFolder")
                        select e;
        foreach (var e in selection)
            e.Extract(outputDirectory);
    }
}
Something like that.
Not very useful, but interesting and helped me learn a little.
(if nothing else, an example of how NOT to do things..hehe)
Thanks
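For what it's worth, a cleaner sketch of the same idea (hedged: this assumes DotNetZip's ZipEntry.FileName is writable, which lets you re-base an entry's path before extraction):
using (ZipFile zip = ZipFile.Read(fileName))
{
    // Snapshot the matching entries before renaming them.
    var wanted = zip.Entries
        .Where(e => e.FileName.StartsWith("MyMainFolder/MySubFolder"))
        .ToList();
    foreach (ZipEntry e in wanted)
    {
        // Strip the leading "MyMainFolder/" so MySubFolder lands at the output root.
        e.FileName = e.FileName.Substring("MyMainFolder/".Length);
        e.Extract(outputDirectory, ExtractExistingFileAction.OverwriteSilently);
    }
}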

C# How to loop a large set of folders and files recursively without using a huge amount of memory

I want to index all my music files and store them in a database.
I have this function that I call recursively, starting from the root of my music drive,
i.e.
start > ReadFiles(@"C:\music\");
ReadFiles(path) {
    foreach (file)
        save to index;
    foreach (directory)
        ReadFiles(directory);
}
This works fine, but while the program runs, the amount of memory it uses grows and grows until, finally, my system runs out of memory.
Does anyone have a better approach that doesn't need 4 GB of RAM to complete this task?
Best regards, Tys
Alxandr's queue-based solution should work fine.
If you're using .NET 4.0, you could also take advantage of the new Directory.EnumerateFiles method, which enumerates files lazily, without loading them all in memory:
void ReadFiles(string path)
{
IEnumerable<string> files =
Directory.EnumerateFiles(
path,
"*",
SearchOption.AllDirectories); // search recursively
foreach(string file in files)
SaveToIndex(file);
}
Did you check for the . and .. entries that raw directory enumeration (for example, Win32 FindFirstFile) returns for every directory except the root? If you don't skip those, you'll recurse into the same directory forever and get an infinite loop. (.NET's Directory methods already filter these out, so this mainly matters if you drop down to P/Invoke.)
You can implement this with a queue. I think (but I'm not sure) that this will save memory. At least it will free up your stack. Whenever you find a folder you add it to the queue, and whenever you find a file you just read it. This prevents recursion.
Something like this:
Queue<string> dirs = new Queue<string>();
dirs.Enqueue("basedir");
while (dirs.Count > 0)
{
    string current = dirs.Dequeue();
    foreach (string dir in Directory.GetDirectories(current))
        dirs.Enqueue(dir); // visit this subfolder later
    foreach (string file in Directory.GetFiles(current))
        SaveToIndex(file); // process files immediately
}
Beware, though, that EnumerateFiles() will stop enumerating if you don't have access to a folder, if a path is too long, or if some other exception occurs. This is what I use for the moment to work around those problems:
public static List<string> getFiles(string path, List<string> files)
{
    try
    {
        // EnumerateFiles is lazy: access errors surface while enumerating,
        // so the AddRange call sits inside the try block.
        files.AddRange(Directory.EnumerateFiles(path));
        // recurse through the subfolders
        foreach (string s in Directory.EnumerateDirectories(path))
        {
            try
            {
                getFiles(s, files);
            }
            catch
            {
            }
        }
    }
    catch
    {
        // no access to this folder (or the path is too long): skip it
    }
    return files;
}
Example use:
List<string> files = new List<string>();
files = folder.getFiles(path, files);
My solution is based on the code at this page: http://msdn.microsoft.com/en-us/library/vstudio/bb513869.aspx.
Update: A MUCH faster method to get files recursively can be found at http://social.msdn.microsoft.com/Forums/vstudio/en-US/ae61e5a6-97f9-4eaa-9f1a-856541c6dcce/directorygetfiles-gives-me-access-denied?forum=csharpgeneral. Using a Stack is new to me (I didn't even know it existed), but the method seems to work. At least it listed all files on my C and D partitions with no errors.
It could be junction folders, which lead to an infinite loop when doing recursion, but I am not sure; check this out and see for yourself. Link: https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/mklink
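If junctions are the culprit, a common workaround (a sketch under that assumption; Recurse stands in for whatever traversal you use) is to check FileAttributes.ReparsePoint before descending:
foreach (DirectoryInfo sub in new DirectoryInfo(path).GetDirectories())
{
    if ((sub.Attributes & FileAttributes.ReparsePoint) != 0)
        continue; // junction or symlink: don't recurse, avoids cycles
    Recurse(sub.FullName); // hypothetical recursive step
}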

How do I compare one collection of files to another in c#?

I am just learning C# (I have been fiddling with it for about 2 days now) and I've decided that, for learning purposes, I will rebuild an old app I made in VB6 for syncing files (generally across a network).
When I wrote the code in VB6, it worked approximately like this:
Create a Scripting.FileSystemObject
Create directory objects for the source and destination
Create file listing objects for the source and destination
Iterate through the source object, and check to see if it exists in the destination
if not, create it
if so, check to see if the source version is newer/larger, and if so, overwrite the other
So far, this is what I have:
private bool syncFiles(string sourcePath, string destPath) {
DirectoryInfo source = new DirectoryInfo(sourcePath);
DirectoryInfo dest = new DirectoryInfo(destPath);
if (!source.Exists) {
LogLine("Source Folder Not Found!");
return false;
}
if (!dest.Exists) {
LogLine("Destination Folder Not Found!");
return false;
}
FileInfo[] sourceFiles = source.GetFiles();
FileInfo[] destFiles = dest.GetFiles();
foreach (FileInfo file in sourceFiles) {
// check exists on file
}
if (optRecursive.Checked) {
foreach (DirectoryInfo subDir in source.GetDirectories()) {
// create-if-not-exists destination subdirectory
syncFiles(sourcePath + subDir.Name, destPath + subDir.Name);
}
}
return true;
}
I have read examples that seem to advocate using the FileInfo or DirectoryInfo objects and checking the "Exists" property, but I am specifically looking for a way to search an existing collection/list of files rather than making live checks against the file system for each file, since I will be doing this across the network, and constantly going back to a multi-thousand-file directory is slow, slow, slow.
Thanks in Advance.
The GetFiles() method will only get you files that do exist. It doesn't make up random files that don't exist. So all you have to do is check whether each one exists in the other list.
Something in the lines of this could work:
var sourceFiles = source.GetFiles();
var destFiles = dest.GetFiles();
foreach (var file in sourceFiles)
{
if(!destFiles.Any(x => x.Name == file.Name))
{
// Do whatever
}
}
Note: You have of course no guarantee that something hasn't changed after you have done the calls to GetFiles(). For example, a file could have been deleted or renamed if you try to copy it later.
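If the folders hold thousands of files, a HashSet of names makes each lookup O(1) instead of a scan over the whole array; a minimal sketch (SyncFile is a hypothetical placeholder for your copy logic):
var destNames = new HashSet<string>(
    destFiles.Select(f => f.Name),
    StringComparer.OrdinalIgnoreCase); // Windows file names compare case-insensitively

foreach (var file in sourceFiles)
{
    if (!destNames.Contains(file.Name))
        SyncFile(file); // hypothetical: copy or queue the missing file
}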
Could perhaps be done nicer somehow by using the Except method or something similar. For example something like this:
var sourceFiles = source.GetFiles();
var destFiles = dest.GetFiles();
var sourceFilesMissingInDestination = sourceFiles.Except(destFiles, new FileNameComparer());
foreach (var file in sourceFilesMissingInDestination)
{
// Do whatever
}
Where the FileNameComparer is implemented like so:
public class FileNameComparer : IEqualityComparer<FileInfo>
{
public bool Equals(FileInfo x, FileInfo y)
{
return Equals(x.Name, y.Name);
}
public int GetHashCode(FileInfo obj)
{
return obj.Name.GetHashCode();
}
}
Untested though :p
One little detail, instead of
sourcePath + subDir.Name
I would use
System.IO.Path.Combine(sourcePath, subDir.Name)
Path does reliable, OS independent operations on file- and foldernames.
Also I notice optRecursive.Checked popping out of nowhere. As a matter of good design, make that a parameter:
bool syncFiles(string sourcePath, string destPath, bool checkRecursive)
And since you mention it may be used for large numbers of files, keep an eye out for .NET 4, it has an IEnumerable replacement for GetFiles() that will let you process this in a streaming fashion.
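A tiny sketch of that streaming variant (assuming .NET 4's DirectoryInfo.EnumerateFiles and the source variable from the question):
// Files are yielded one at a time instead of materialized up front.
foreach (FileInfo file in source.EnumerateFiles())
{
    // compare against the destination set here, as above
}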

Quickest way in C# to find a file in a directory with over 20,000 files

I have a job that runs every night to pull xml files from a directory that has over 20,000 subfolders under the root. Here is what the structure looks like:
rootFolder/someFolder/someSubFolder/xml/myFile.xml
rootFolder/someFolder/someSubFolder1/xml/myFile1.xml
rootFolder/someFolder/someSubFolderN/xml/myFile2.xml
rootFolder/someFolder1
rootFolder/someFolderN
So looking at the above, the structure is always the same - a root folder, then two subfolders, then an xml directory, and then the xml file.
Only the name of the rootFolder and the xml directory are known to me.
The code below traverses through all the directories and is extremely slow. Any recommendations on how I can optimize the search especially if the directory structure is known?
string[] files = Directory.GetFiles(@"\\somenetworkpath\rootFolder", "*.xml", SearchOption.AllDirectories);
Rather than calling GetFiles and doing a brute-force search, you could use GetDirectories: first get a list of the first-level subfolders, loop through those directories, repeat the process for each subfolder, then look for the xml folder, and finally search it for .xml files.
As for performance, the speed of this will vary, but searching for directories first, THEN getting to the files, should help a lot!
Update
Ok, I did a quick bit of testing and you can actually optimize it much further than I thought.
The following code snippet will search a directory structure and find ALL "xml" folders inside the entire directory tree.
string startPath = @"C:\Testing\Testing\bin\Debug";
string[] oDirectories = Directory.GetDirectories(startPath, "xml", SearchOption.AllDirectories);
Console.WriteLine(oDirectories.Length.ToString());
foreach (string oCurrent in oDirectories)
Console.WriteLine(oCurrent);
Console.ReadLine();
If you drop that into a test console app you will see it output the results.
Now, once you have this, just look in each of the found directories for your .xml files.
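For example, a short sketch of that second step, reusing oDirectories from above:
foreach (string xmlDir in oDirectories)
    foreach (string xmlFile in Directory.GetFiles(xmlDir, "*.xml"))
        Console.WriteLine(xmlFile); // or collect into a list for processing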
I created a recursive method GetFolders that uses a Parallel.ForEach to find all the folders named like the variable yourKeyword:
// yourKeyword is assumed to be a field (or captured variable) holding the folder name to match
private List<string> GetFolders(string[] subFolders)
{
    List<string> returnFolders = new List<string>();
    object locker = new object();
    Parallel.ForEach(subFolders, subFolder =>
    {
        if (subFolder.ToUpper().EndsWith(yourKeyword))
        {
            lock (locker)
            {
                returnFolders.Add(subFolder);
            }
        }
        else
        {
            lock (locker)
            {
                returnFolders.AddRange(GetFolders(Directory.GetDirectories(subFolder)));
            }
        }
    });
    return returnFolders;
}
Are there additional directories at the same level as the xml folder? If so, you could probably speed up the search if you do it yourself and eliminate that level from searching.
System.IO.DirectoryInfo root = new System.IO.DirectoryInfo(rootPath);
List<System.IO.FileInfo> xmlFiles=new List<System.IO.FileInfo>();
foreach (System.IO.DirectoryInfo subDir1 in root.GetDirectories())
{
foreach (System.IO.DirectoryInfo subDir2 in subDir1.GetDirectories())
{
System.IO.DirectoryInfo xmlDir = new System.IO.DirectoryInfo(System.IO.Path.Combine(subDir2.FullName, "xml"));
if (xmlDir.Exists)
{
xmlFiles.AddRange(xmlDir.GetFiles("*.xml"));
}
}
}
I can't think of anything faster in C#, but do you have indexing turned on for that file system?
The only way I can see to make much of a difference is to move away from a brute-force hunt and use some third-party or OS indexing routine to speed up the lookup. That way the search is done offline from your app.
But I would also suggest you look at better ways to structure that data if at all possible.
Use P/Invoke on FindFirstFile/FindNextFile/FindClose and avoid the overhead of creating lots of FileInfo instances.
But this will be hard work to get right (you will have to handle the file vs. directory distinction and the recursion yourself). So try something simple (Directory.GetFiles(), Directory.GetDirectories()) to start with and get things working. If it is too slow, look at alternatives (but always measure; it is too easy to make it slower).
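For reference, a minimal sketch of that FindFirstFile pattern (hedged: error handling is reduced to skipping inaccessible folders, and the struct is the standard WIN32_FIND_DATA layout):
using System;
using System.IO;
using System.Runtime.InteropServices;

static class FastFind
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct WIN32_FIND_DATA
    {
        public FileAttributes dwFileAttributes;
        public System.Runtime.InteropServices.ComTypes.FILETIME ftCreationTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME ftLastAccessTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME ftLastWriteTime;
        public uint nFileSizeHigh, nFileSizeLow, dwReserved0, dwReserved1;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)] public string cFileName;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 14)] public string cAlternateFileName;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr FindFirstFile(string lpFileName, out WIN32_FIND_DATA data);

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern bool FindNextFile(IntPtr hFindFile, out WIN32_FIND_DATA data);

    [DllImport("kernel32.dll")]
    static extern bool FindClose(IntPtr hFindFile);

    static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

    // Calls onFile for every file below dir, without allocating FileInfo objects.
    public static void Walk(string dir, Action<string> onFile)
    {
        WIN32_FIND_DATA data;
        IntPtr handle = FindFirstFile(Path.Combine(dir, "*"), out data);
        if (handle == INVALID_HANDLE_VALUE)
            return; // no access, or the directory is gone
        try
        {
            do
            {
                if (data.cFileName == "." || data.cFileName == "..")
                    continue; // skip the self/parent pseudo-entries
                string full = Path.Combine(dir, data.cFileName);
                if ((data.dwFileAttributes & FileAttributes.Directory) != 0)
                    Walk(full, onFile); // recurse into the subdirectory
                else
                    onFile(full);
            } while (FindNextFile(handle, out data));
        }
        finally
        {
            FindClose(handle);
        }
    }
}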
Depending on your needs and configuration, you could utilize the Windows Search Index: https://msdn.microsoft.com/en-us/library/windows/desktop/bb266517(v=vs.85).aspx
Depending on your configuration this could increase performance greatly.
For file and directory searches I would suggest this multithreaded .NET library, which offers a wide range of search options.
All information about the library can be found on GitHub: https://github.com/VladPVS/FastSearchLibrary
If you want to download it, you can do so here: https://github.com/VladPVS/FastSearchLibrary/releases
If you have any questions, please ask them.
It works really fast. Check it yourself!
Here is one demonstrative example of how you can use it:
class Searcher
{
private static object locker = new object();
private FileSearcher searcher;
List<FileInfo> files;
public Searcher()
{
files = new List<FileInfo>();
}
public void Startsearch()
{
CancellationTokenSource tokenSource = new CancellationTokenSource();
searcher = new FileSearcher(@"C:\", (f) =>
{
    return Regex.IsMatch(f.Name, @".*[Dd]ragon.*\.jpg$");
}, tokenSource);
searcher.FilesFound += (sender, arg) =>
{
lock (locker) // using a lock is obligatory
{
arg.Files.ForEach((f) =>
{
files.Add(f);
Console.WriteLine($"File location: {f.FullName}, \nCreation.Time: {f.CreationTime}");
});
if (files.Count >= 10)
searcher.StopSearch();
}
};
searcher.SearchCompleted += (sender, arg) =>
{
if (arg.IsCanceled)
Console.WriteLine("Search stopped.");
else
Console.WriteLine("Search completed.");
Console.WriteLine($"Quantity of files: {files.Count}");
};
searcher.StartSearchAsync();
}
}
It's part of another example:
***
List<string> folders = new List<string>
{
#"C:\Users\Public",
#"C:\Windows\System32",
#"D:\Program Files",
#"D:\Program Files (x86)"
}; // list of search directories
List<string> keywords = new List<string> { "word1", "word2", "word3" }; // list of search keywords
FileSearcherMultiple multipleSearcher = new FileSearcherMultiple(folders, (f) =>
{
if (f.CreationTime >= new DateTime(2015, 3, 15) &&
(f.Extension == ".cs" || f.Extension == ".sln"))
foreach (var keyword in keywords)
if (f.Name.Contains(keyword))
return true;
return false;
}, tokenSource, ExecuteHandlers.InCurrentTask, true);
***
Moreover, one can use a simple static method:
List<FileInfo> files = FileSearcher.GetFilesFast(@"C:\Users", "*.xml");
Note that all methods of this library DO NOT throw UnauthorizedAccessException, unlike the standard .NET search methods.
Furthermore, the fast methods of this library run at least twice as fast as a simple single-threaded recursive algorithm if you use a multicore processor.
For those of you who want to search for a single file and know your root directory, I suggest you keep it as simple as possible. This approach worked for me:
private void btnSearch_Click(object sender, EventArgs e)
{
string userinput = txtInput.Text;
string sourceFolder = @"C:\mytestDir\";
string searchWord = txtInput.Text + ".pdf";
string filePresentCK = sourceFolder + searchWord;
if (File.Exists(filePresentCK))
{
pdfViewer1.LoadFromFile(sourceFolder+searchWord);
}
else if (!File.Exists(filePresentCK))
{
MessageBox.Show("Unable to Find file :" + searchWord);
}
txtInput.Clear();
}// end of btnSearch method
