I've written the following routine to manually traverse a directory and calculate its size in C#/.NET:
protected static float CalculateFolderSize(string folder)
{
    float folderSize = 0.0f;
    try
    {
        // Check that the path is valid
        if (!Directory.Exists(folder))
            return folderSize;
        else
        {
            try
            {
                foreach (string file in Directory.GetFiles(folder))
                {
                    if (File.Exists(file))
                    {
                        FileInfo finfo = new FileInfo(file);
                        folderSize += finfo.Length;
                    }
                }
                foreach (string dir in Directory.GetDirectories(folder))
                    folderSize += CalculateFolderSize(dir);
            }
            catch (NotSupportedException e)
            {
                Console.WriteLine("Unable to calculate folder size: {0}", e.Message);
            }
        }
    }
    catch (UnauthorizedAccessException e)
    {
        Console.WriteLine("Unable to calculate folder size: {0}", e.Message);
    }
    return folderSize;
}
I have an application which is running this routine repeatedly for a large number of folders. I'm wondering if there's a more efficient way to calculate the size of a folder with .NET? I didn't see anything specific in the framework. Should I be using P/Invoke and a Win32 API? What's the most efficient way of calculating the size of a folder in .NET?
No, this looks like the recommended way to calculate directory size; the relevant method is included below:
public static long DirSize(DirectoryInfo d)
{
    long size = 0;
    // Add file sizes.
    FileInfo[] fis = d.GetFiles();
    foreach (FileInfo fi in fis)
    {
        size += fi.Length;
    }
    // Add subdirectory sizes.
    DirectoryInfo[] dis = d.GetDirectories();
    foreach (DirectoryInfo di in dis)
    {
        size += DirSize(di);
    }
    return size;
}
You would call it with the root as:
Console.WriteLine("The size is {0} bytes.", DirSize(new DirectoryInfo(targetFolder));
...where targetFolder is the folder whose size you want to calculate.
DirectoryInfo dirInfo = new DirectoryInfo(strDirPath);
long dirSize = await Task.Run(() => dirInfo.EnumerateFiles("*", SearchOption.AllDirectories).Sum(file => file.Length));
I do not believe there is a Win32 API to calculate the space consumed by a directory, although I stand to be corrected on this. If there were then I would assume Explorer would use it. If you get the Properties of a large directory in Explorer, the time it takes to give you the folder size is proportional to the number of files/sub-directories it contains.
Your routine seems fairly neat and simple. Bear in mind that you are calculating the sum of the file lengths, not the actual space consumed on the disk: space wasted at the end of clusters, alternate file streams, and so on is ignored.
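If you do want something closer to size-on-disk, one option (a sketch, not part of the original answer) is to P/Invoke the Win32 GetCompressedFileSize API, which reports the bytes actually stored for compressed and sparse files. Note that for ordinary files it returns the same value as the logical size, so cluster slack is still not included; you would have to round up to the cluster size yourself for that.
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class DiskUsage
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern uint GetCompressedFileSizeW(string lpFileName, out uint lpFileSizeHigh);

    // Returns the number of bytes actually stored for the file
    // (smaller than Length for compressed or sparse files).
    public static long GetSizeOnDisk(string path)
    {
        uint high;
        uint low = GetCompressedFileSizeW(path, out high);
        if (low == 0xFFFFFFFF && Marshal.GetLastWin32Error() != 0)
            throw new Win32Exception();
        return ((long)high << 32) | low;
    }
}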
public static long DirSize(DirectoryInfo dir)
{
    return dir.GetFiles().Sum(fi => fi.Length) +
           dir.GetDirectories().Sum(di => DirSize(di));
}
The real question is, what do you intend to use the size for?
Your first problem is that there are at least four definitions for "file size":
The "end of file" offset, which is the number of bytes you have to skip to go from the beginning to the end of the file.
In other words, it is the number of bytes logically in the file (from a usage perspective).
The "valid data length", which is equal to the offset of the first byte which is not actually stored.
This is always less than or equal to the "end of file", and is a multiple of the cluster size.
For example, a 1 GB file can have a valid data length of 1 MB. If you ask Windows to read the first 8 MB, it will read the first 1 MB and pretend the rest of the data was there, returning it as zeros.
The "allocated size" of a file. This is always greater than or equal to the "end of file".
This is the number of clusters that the OS has allocated for the file, multiplied by the cluster size.
Unlike the case where the "end of file" is greater than the "valid data length", the excess bytes are not considered to be part of the file's data, so the OS will not fill a buffer with zeros if you try to read the allocated region beyond the end of the file.
The "compressed size" of a file, which is only valid for compressed (and sparse?) files.
It is equal to the size of a cluster, multiplied by the number of clusters on the volume that are actually allocated to this file.
For non-compressed and non-sparse files, there is no notion of "compressed size"; you would use the "allocated size" instead.
Your second problem is that a "file" like C:\Foo can actually have multiple streams of data.
This name just refers to the default stream. A file might have alternate streams, like C:\Foo:Bar, whose size is not even shown in Explorer!
Your third problem is that a "file" can have multiple names ("hard links").
For example, C:\Windows\notepad.exe and C:\Windows\System32\notepad.exe are two names for the same file. Any name can be used to open any stream of the file.
Your fourth problem is that a "file" (or directory) might in fact not even be a file (or directory):
It might be a soft link (a "symbolic link" or a "reparse point") to some other file (or directory).
That other file might not even be on the same drive. It might even point to something on the network, or it might even be recursive! Should the size be infinity if it's recursive?
Your fifth problem is that there are "filter" drivers that make certain files or directories look like actual files or directories, even though they aren't. For example, Microsoft's WIM image files (which are compressed) can be "mounted" on a folder using a tool called ImageX, and those do not look like reparse points or links. They look just like directories -- except that they're not actually directories, and the notion of "size" doesn't really make sense for them.
Your sixth problem is that every file requires metadata.
For example, having 10 names for the same file requires more metadata, which requires space. If the file names are short, having 10 names might be as cheap as having 1 name -- and if they're long, then having multiple names can use more disk space for the metadata. (Same story with multiple streams, etc.)
Do you count these, too?
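To make the link problems (the fourth and fifth points) concrete, here is a minimal sketch, not part of the original answer, that skips anything marked as a reparse point while recursing, so symlinks and junctions cannot inflate the total or cause infinite recursion:
using System.IO;

static long DirSizeSkippingLinks(DirectoryInfo d)
{
    long size = 0;
    // Skip files and directories that are reparse points (symlinks, junctions, mount points).
    foreach (FileInfo fi in d.GetFiles())
        if ((fi.Attributes & FileAttributes.ReparsePoint) == 0)
            size += fi.Length;
    foreach (DirectoryInfo di in d.GetDirectories())
        if ((di.Attributes & FileAttributes.ReparsePoint) == 0)
            size += DirSizeSkippingLinks(di);
    return size;
}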
var size = new DirectoryInfo("E:\\").GetDirectorySize();
and here's the code behind this extension method:
public static long GetDirectorySize(this System.IO.DirectoryInfo directoryInfo, bool recursive = true)
{
    var startDirectorySize = default(long);
    if (directoryInfo == null || !directoryInfo.Exists)
        return startDirectorySize; // Return 0 if the directory does not exist.

    // Add the size of files in the current directory to the total.
    foreach (var fileInfo in directoryInfo.GetFiles())
        System.Threading.Interlocked.Add(ref startDirectorySize, fileInfo.Length);

    if (recursive) // Loop over the subdirectories of the current directory and add their sizes.
        System.Threading.Tasks.Parallel.ForEach(directoryInfo.GetDirectories(), (subDirectory) =>
            System.Threading.Interlocked.Add(ref startDirectorySize, GetDirectorySize(subDirectory, recursive)));

    return startDirectorySize; // Return the full size of this directory.
}
Even faster! Add a COM reference to "Windows Script Host Object Model".
public double GetWSHFolderSize(string Fldr)
{
    // Reference "Windows Script Host Object Model" on the COM tab.
    IWshRuntimeLibrary.FileSystemObject FSO = new IWshRuntimeLibrary.FileSystemObject();
    double FldrSize = (double)FSO.GetFolder(Fldr).Size;
    Marshal.FinalReleaseComObject(FSO);
    return FldrSize;
}
private void button1_Click(object sender, EventArgs e)
{
    string folderPath = @"C:\Windows";
    Stopwatch sWatch = new Stopwatch();
    sWatch.Start();
    double sizeOfDir = GetWSHFolderSize(folderPath);
    sWatch.Stop();
    MessageBox.Show("Directory size in Bytes : " + sizeOfDir + ", Time: " + sWatch.ElapsedMilliseconds.ToString());
}
It appears that the following method performs your task faster than a recursive function:
long size = 0;
DirectoryInfo dir = new DirectoryInfo(folder);
foreach (FileInfo fi in dir.GetFiles("*.*", SearchOption.AllDirectories))
{
    size += fi.Length;
}
A simple console application test shows that this loop sums files faster than the recursive function, and provides the same result. And you probably want to use LINQ methods (like Sum()) to shorten this code, as sketched below.
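For example, a minimal sketch of the same loop collapsed into a single Sum() call (requires System.Linq):
long size = new DirectoryInfo(folder)
    .GetFiles("*.*", SearchOption.AllDirectories)
    .Sum(fi => fi.Length);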
This solution works very well, and it collects all the subfolders:
Directory.GetFiles(#"MainFolderPath", "*", SearchOption.AllDirectories).Sum(t => (new FileInfo(t).Length));
An alternative to Trikaldarshi's one-line solution. (It uses EnumerateFiles, which streams the file names instead of building the whole array up front.)
long sizeInBytes = Directory.EnumerateFiles("{path}", "*", SearchOption.AllDirectories).Sum(path => new FileInfo(path).Length);
I've been fiddling with VS2008 and LINQ up until recently and this compact and short method works great for me (example is in VB.NET; requires LINQ / .NET FW 3.5+ of course):
Dim size As Int64 = (From strFile In My.Computer.FileSystem.GetFiles(strFolder, _
FileIO.SearchOption.SearchAllSubDirectories) _
Select New System.IO.FileInfo(strFile).Length).Sum()
It's short, it searches subdirectories, and it's simple to understand if you know LINQ syntax. You could even specify wildcards to search for specific files using the third parameter of the GetFiles function.
I'm not a C# expert, but you can add the My namespace in C# this way.
I think this way of obtaining a folder size is not only shorter and more modern than the way described in Hao's link; in the end it basically uses the same loop-over-FileInfo method described there.
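For C# readers, a rough equivalent of the VB snippet without the My namespace (a sketch; strFolder is assumed to hold the target path, and it requires System.IO and System.Linq):
long size = Directory.GetFiles(strFolder, "*", SearchOption.AllDirectories)
                     .Select(strFile => new FileInfo(strFile).Length)
                     .Sum();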
This is the best way to calculate the size of a directory. The only other way would still use recursion; it is a bit easier to use, but isn't as flexible.
long folderSize = 0;
FileInfo[] files = new DirectoryInfo(folder).GetFiles("*", SearchOption.AllDirectories);
foreach (FileInfo file in files) folderSize += file.Length;
I extended @Hao's answer using the same counting principle, but supporting a richer data return, so you get back size, recursive size, directory count, and recursive directory count, N levels deep.
public class DiskSizeUtil
{
    /// <summary>
    /// Calculate disk space usage under <paramref name="root"/>. If <paramref name="levels"/> is provided,
    /// then return subdirectory disk usages as well, up to <paramref name="levels"/> levels deep.
    /// If levels is not provided or is 0, return a single element representing the
    /// directory specified by <paramref name="root"/>.
    /// </summary>
    public static FolderSizeInfo GetDirectorySize(DirectoryInfo root, int levels = 0)
    {
        var currentDirectory = new FolderSizeInfo();

        // Add file sizes.
        FileInfo[] fis = root.GetFiles();
        currentDirectory.Size = 0;
        foreach (FileInfo fi in fis)
        {
            currentDirectory.Size += fi.Length;
        }

        // Add subdirectory sizes.
        DirectoryInfo[] dis = root.GetDirectories();

        currentDirectory.Path = root;
        currentDirectory.SizeWithChildren = currentDirectory.Size;
        currentDirectory.DirectoryCount = dis.Length;
        currentDirectory.DirectoryCountWithChildren = dis.Length;
        currentDirectory.FileCount = fis.Length;
        currentDirectory.FileCountWithChildren = fis.Length;

        if (levels >= 0)
            currentDirectory.Children = new List<FolderSizeInfo>();

        foreach (DirectoryInfo di in dis)
        {
            var dd = GetDirectorySize(di, levels - 1);
            if (levels >= 0)
                currentDirectory.Children.Add(dd);

            currentDirectory.SizeWithChildren += dd.SizeWithChildren;
            currentDirectory.DirectoryCountWithChildren += dd.DirectoryCountWithChildren;
            currentDirectory.FileCountWithChildren += dd.FileCountWithChildren;
        }

        return currentDirectory;
    }

    public class FolderSizeInfo
    {
        public DirectoryInfo Path { get; set; }
        public long SizeWithChildren { get; set; }
        public long Size { get; set; }
        public int DirectoryCount { get; set; }
        public int DirectoryCountWithChildren { get; set; }
        public int FileCount { get; set; }
        public int FileCountWithChildren { get; set; }
        public List<FolderSizeInfo> Children { get; set; }
    }
}
public static long GetDirSize(string path)
{
    try
    {
        return Directory.EnumerateFiles(path).Sum(x => new FileInfo(x).Length)
               + Directory.EnumerateDirectories(path).Sum(x => GetDirSize(x));
    }
    catch
    {
        return 0L;
    }
}
As far as the best algorithm goes, you probably have it right. I would recommend that you unravel the recursive function and use a stack of your own (remember, a stack overflow is the end of the world in a .NET 2.0+ app; the exception cannot be caught, IIRC). A sketch follows.
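A minimal sketch of that unravelling, using an explicit Stack<string> instead of recursion (one assumption: silently skipping directories we can't read is acceptable):
using System;
using System.Collections.Generic;
using System.IO;

static long DirSizeIterative(string root)
{
    long size = 0;
    var pending = new Stack<string>();
    pending.Push(root);
    while (pending.Count > 0)
    {
        string dir = pending.Pop();
        try
        {
            // Sum the files in this directory, then queue its subdirectories.
            foreach (string file in Directory.GetFiles(dir))
                size += new FileInfo(file).Length;
            foreach (string sub in Directory.GetDirectories(dir))
                pending.Push(sub);
        }
        catch (UnauthorizedAccessException)
        {
            // Inaccessible directory: skip it rather than abort the scan.
        }
    }
    return size;
}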
The most important thing is that if you are using it in any form of UI, put it on a worker thread that signals the UI thread with updates.
To improve the performance, you could use the Task Parallel Library (TPL).
Here is a good sample: Directory file size calculation - how to make it faster?
I didn't test it, but the author says it is 3 times faster than a non-multithreaded method...
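As a rough illustration of the idea (a sketch, not the linked author's code), a parallel sum over an enumeration of all files might look like:
using System.IO;
using System.Threading;
using System.Threading.Tasks;

static long ParallelDirSize(string root)
{
    long total = 0;
    // Enumerate every file under root and add its length atomically.
    Parallel.ForEach(
        Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories),
        file => Interlocked.Add(ref total, new FileInfo(file).Length));
    return total;
}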
Directory.GetFiles(#"C:\Users\AliBayat","*",SearchOption.AllDirectories)
.Select (d => new FileInfo(d))
.Select (d => new { Directory = d.DirectoryName,FileSize = d.Length} )
.ToLookup (d => d.Directory )
.Select (d => new { Directory = d.Key,TotalSizeInMB =Math.Round(d.Select (x =>x.FileSize)
.Sum () /Math.Pow(1024.0,2),2)})
.OrderByDescending (d => d.TotalSizeInMB).ToList();
Calling GetFiles with SearchOption.AllDirectories returns the full names of all the files in all the subdirectories of the specified directory. The OS represents file sizes in bytes; you can retrieve a file's size from its Length property, and dividing it by 1024 raised to the power of 2 gives you the size in megabytes. Because a directory/folder can contain many files, d.Select(x => x.FileSize) returns a collection of file sizes, and the final call to Sum() finds the total size of the files in the specified directory.
Update: the filterMask = "*.*" does not work with files without an extension
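To display the result, something like the following works (the variable name largestDirectories was added above to make the query result usable):
foreach (var d in largestDirectories)
    Console.WriteLine("{0,12:N2} MB  {1}", d.TotalSizeInMB, d.Directory);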
A multithreaded example to calculate directory size, from Microsoft Docs, which should be faster:
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

public class Example
{
    public static void Main()
    {
        long totalSize = 0;

        String[] args = Environment.GetCommandLineArgs();
        if (args.Length == 1) {
            Console.WriteLine("There are no command line arguments.");
            return;
        }
        if (!Directory.Exists(args[1])) {
            Console.WriteLine("The directory does not exist.");
            return;
        }

        String[] files = Directory.GetFiles(args[1]);
        Parallel.For(0, files.Length,
                     index => { FileInfo fi = new FileInfo(files[index]);
                                long size = fi.Length;
                                Interlocked.Add(ref totalSize, size);
                              });
        Console.WriteLine("Directory '{0}':", args[1]);
        Console.WriteLine("{0:N0} files, {1:N0} bytes", files.Length, totalSize);
    }
}
// The example displays output like the following:
//    Directory 'c:\windows\':
//    32 files, 6,587,222 bytes
This example only calculates the files in the current folder, so if you want to count all the files recursively, change
String[] files = Directory.GetFiles(args[1]);
to
String[] files = Directory.GetFiles(args[1], "*", SearchOption.AllDirectories);
The fastest way I came up with is using EnumerateFiles with SearchOption.AllDirectories. This method also allows updating the UI while going through the files and counting the size. Long path names don't cause any problems, because no FileInfo or DirectoryInfo has to be constructed for the long path itself. While enumerating, even though a file name is long, the FileInfo returned by EnumerateFiles causes no problems as long as the starting directory name is not too long. There is still a problem with UnauthorizedAccessException.
private void DirectoryCountEnumTest(string sourceDirName)
{
    // Get the subdirectories for the specified directory.
    long dataSize = 0;
    long fileCount = 0;
    string prevText = richTextBox1.Text;
    if (Directory.Exists(sourceDirName))
    {
        DirectoryInfo dir = new DirectoryInfo(sourceDirName);
        foreach (FileInfo file in dir.EnumerateFiles("*", SearchOption.AllDirectories))
        {
            fileCount++;
            try
            {
                dataSize += file.Length;
                richTextBox1.Text = prevText + ("\nCounting size: " + dataSize.ToString());
            }
            catch (Exception e)
            {
                richTextBox1.AppendText("\n" + e.Message);
            }
        }
        richTextBox1.AppendText("\n files:" + fileCount.ToString());
    }
}
This .NET core command line app here calculates directory sizes for a given path:
https://github.com/garethrbrown/folder-size
The key method is this one, which recursively inspects subdirectories to come up with a total size.
private static long DirectorySize(SortDirection sortDirection, DirectoryInfo directoryInfo, DirectoryData directoryData)
{
    long directorySizeBytes = 0;

    // Add file sizes for the current directory
    FileInfo[] fileInfos = directoryInfo.GetFiles();
    foreach (FileInfo fileInfo in fileInfos)
    {
        directorySizeBytes += fileInfo.Length;
    }

    directoryData.Name = directoryInfo.Name;
    directoryData.SizeBytes += directorySizeBytes;

    // Recursively add subdirectory sizes
    DirectoryInfo[] subDirectories = directoryInfo.GetDirectories();
    foreach (DirectoryInfo di in subDirectories)
    {
        var subDirectoryData = new DirectoryData(sortDirection);
        directoryData.DirectoryDatas.Add(subDirectoryData);
        directorySizeBytes += DirectorySize(sortDirection, di, subDirectoryData);
    }

    directoryData.SizeBytes = directorySizeBytes;

    return directorySizeBytes;
}
This link, https://learn.microsoft.com/en-us/office/vba/language/reference/user-interface-help/size-property-filesystemobject-object, describes how to get the folder size directly using Visual Basic, without having to get a list of files and loop over them adding up their lengths.
Sub ShowFolderSize(filespec)
    Dim fs, f, s
    Set fs = CreateObject("Scripting.FileSystemObject")
    Set f = fs.GetFolder(filespec)
    s = UCase(f.Name) & " uses " & f.Size & " bytes."
    MsgBox s, 0, "Folder Size Info"
End Sub
In a C# project, you can also add a reference to the Microsoft Scripting Runtime and use the FileSystemObject.
Here is a C# routine that uses this method to output the sizes of all the folders in the given path. It recurses up to a specified level, examining subfolders whose size is larger than average, in an attempt to find where storage use problems are caused.
using System;
using System.IO;
using System.Collections.Generic;
using System.Linq;

namespace ShowFolderSizes
{
    public class ShowFolderSizesMain
    {
        double GBFactor = 1024.0 * 1024.0 * 1024.0;
        Scripting.FileSystemObject fileSystemObject = new Scripting.FileSystemObject();

        public static void Main(string[] args)
        {
            ShowFolderSizesMain instance = new ShowFolderSizesMain();
            instance.Run(args);
        }

        void Run(string[] args)
        {
            if (args.Length != 2)
            {
                Console.WriteLine("Usage: ShowFolderSizes path levels");
                return;
            }
            string path = args[0];
            if (!Int32.TryParse(args[1], out int levels))
            {
                Console.WriteLine($"Can't interpret {args[1]} as an integer.");
                return;
            }
            writeFolderSizes(path, levels);
            //Console.WriteLine("Press any key to continue...");
            //Console.ReadKey();
        }

        public void writeFolderSizes(string topPath, int levels)
        {
            List<string> folderNames;
            try
            {
                folderNames = new List<string>(Directory.GetDirectories(topPath));
            }
            catch (System.UnauthorizedAccessException)
            {
                Console.WriteLine($"Can't access {topPath}");
                return;
            }

            if (folderNames.Count == 0)
            {
                return;
            }

            var dic = new Dictionary<string, long>();
            double sum = 0.0;
            foreach (string folderPath in folderNames)
            {
                Scripting.Folder folder = fileSystemObject.GetFolder(folderPath);
                try
                {
                    dynamic dsize = folder.Size;
                    long size = Convert.ToInt64(dsize);
                    dic.Add(folderPath, size);
                    sum += Convert.ToDouble(size);
                }
                catch (System.Security.SecurityException)
                {
                    Console.WriteLine($"Can't access {folderPath}");
                    dic.Remove(folderPath);
                }
            }

            sum = sum / GBFactor;
            double avg = (sum / folderNames.Count);

            Console.WriteLine($"{topPath} {sum.ToString("0.000")} GB:");
            var sortedResults = (
                from KeyValuePair<string, long> kvp in dic
                orderby kvp.Value descending
                select kvp);
            foreach (KeyValuePair<string, long> kvp in sortedResults)
            {
                double gb = Convert.ToDouble(kvp.Value) / GBFactor;
                Console.WriteLine($"{gb.ToString("000.000")} GB {kvp.Key}");
            }
            Console.WriteLine();

            if (levels > 0)
            {
                long cutoff = Convert.ToInt64(avg * GBFactor);
                var foldersToRecurse = (
                    from KeyValuePair<string, long> kvp in dic
                    where kvp.Value >= cutoff
                    orderby kvp.Value descending
                    select kvp.Key);
                int nextLevel = levels - 1;
                foreach (string folderPath in foldersToRecurse)
                {
                    writeFolderSizes(folderPath, nextLevel);
                }
            }
        }
    }
}
For it to be really useful, it often needs to be run as administrator, since trying to access folders like C:\Program Files or C:\Users sends it into the "catch" branches when run as my normal user.
I tried changing the samples (Alexandre Pepin's and hao's answers).
As is
private long GetDirectorySize(string dirPath)
{
    if (Directory.Exists(dirPath) == false)
    {
        return 0;
    }

    DirectoryInfo dirInfo = new DirectoryInfo(dirPath);
    long size = 0;

    // Add file sizes.
    FileInfo[] fis = dirInfo.GetFiles();
    foreach (FileInfo fi in fis)
    {
        size += fi.Length;
    }

    // Add subdirectory sizes.
    DirectoryInfo[] dis = dirInfo.GetDirectories();
    foreach (DirectoryInfo di in dis)
    {
        size += GetDirectorySize(di.FullName);
    }

    return size;
}
To be
private long GetDirectorySize2(string dirPath)
{
    if (Directory.Exists(dirPath) == false)
    {
        return 0;
    }

    DirectoryInfo dirInfo = new DirectoryInfo(dirPath);
    long size = 0;

    // Add file sizes, enumerating all subdirectories directly.
    IEnumerable<FileInfo> fis = dirInfo.EnumerateFiles("*.*", SearchOption.AllDirectories);
    foreach (FileInfo fi in fis)
    {
        size += fi.Length;
    }

    return size;
}
Finally, you can check the results:
// ---------------------------------------------
// size of directory
using System.IO;

string log1Path = @"D:\SampleDirPath1";
string log2Path = @"D:\SampleDirPath2";
string log1DirName = Path.GetDirectoryName(log1Path);
string log2DirName = Path.GetDirectoryName(log2Path);
long log1Size = GetDirectorySize(log1Path);
long log2Size = GetDirectorySize(log2Path);
long log1Size2 = GetDirectorySize2(log1Path);
long log2Size2 = GetDirectorySize2(log2Path);
Console.WriteLine($@"{log1DirName} Size: {SizeSuffix(log1Size)}, {SizeSuffix(log1Size2)}
{log2DirName} Size: {SizeSuffix(log2Size)}, {SizeSuffix(log2Size2)}");
and this is the SizeSuffix function:
private static readonly string[] SizeSuffixes =
    { "bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB" };

/// <summary>
/// Size display
/// </summary>
/// <param name="value">value in bytes</param>
/// <param name="decimalPlaces">number of decimal places</param>
/// <returns></returns>
public static string SizeSuffix(Int64 value, int decimalPlaces = 2)
{
    if (decimalPlaces < 0) { throw new ArgumentOutOfRangeException("decimalPlaces"); }
    if (value < 0) { return "-" + SizeSuffix(-value); }
    if (value == 0) { return string.Format("{0:n" + decimalPlaces + "} bytes", 0); }

    // mag is 0 for bytes, 1 for KB, 2 for MB, etc.
    int mag = (int)Math.Log(value, 1024);

    // 1L << (mag * 10) == 2 ^ (10 * mag)
    // [i.e. the number of bytes in the unit corresponding to mag]
    decimal adjustedSize = (decimal)value / (1L << (mag * 10));

    // make adjustment when the value is large enough that
    // it would round up to 1000 or more
    if (Math.Round(adjustedSize, decimalPlaces) >= 1000)
    {
        mag += 1;
        adjustedSize /= 1024;
    }

    return string.Format("{0:n" + decimalPlaces + "} {1}",
        adjustedSize,
        SizeSuffixes[mag]);
}
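A quick sanity check of the helper with a few illustrative values:
Console.WriteLine(SizeSuffix(0));          // 0.00 bytes
Console.WriteLine(SizeSuffix(1536));       // 1.50 KB
Console.WriteLine(SizeSuffix(1073741824)); // 1.00 GB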
I know this is not a .NET solution, but here it comes anyway. Maybe it comes in handy for people who have Windows 10 and want a faster option. For example, run this command in your command prompt or after pressing WinKey + R:
bash -c "du -sh /mnt/c/Users/; sleep 5"
The sleep 5 is there so you have time to see the results and the window does not close.
On my computer, that displays the usage per folder and, at the end, the total: 85G (85 gigabytes). It is super fast compared to doing it with .NET. If you want to see the size more accurately, remove the h, which stands for human-readable.
So just do something like Process.Start("bash", ...arguments). That is not the exact code, but you get the idea; a sketch follows.
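A sketch of what that might look like (parsing of the du output is left out, and WSL must be installed for "bash" to resolve):
using System;
using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "bash",
    Arguments = "-c \"du -s /mnt/c/Users/\"",
    RedirectStandardOutput = true,
    UseShellExecute = false
};
using (var p = Process.Start(psi))
{
    // du prints "<size-in-KB>\t<path>"
    string output = p.StandardOutput.ReadToEnd();
    p.WaitForExit();
    Console.WriteLine(output);
}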
I am a beginner to programming. I wrote code in C# to open a single file (which has 4 columns of data) and extract the fourth column into a list. Then I did some basic work on the data to extract the mean, minimum and maximum values of the data set. The results were then written to dedicated files for the mean, minimum and maximum values.
Now I want to repeat the same tests, but for multiple sets of files, each with over 100,000 lines of data. I want to enable the program to read multiple files in the same folder, do the same calculations for each file, and compile all the results for mean, minimum and maximum values into separate folders, as before.
The code for the single file is as follows:
private void button1_Click_1(object sender, EventArgs e)
{
    string text = "";
    DialogResult result = openFileDialog1.ShowDialog(); // Show the dialog.

    // create a list to insert the data into
    List<float> noise = new List<float>();
    int count = 0;
    float sum = 0;
    float mean = 0;
    float max = 0;
    float min = 100;

    TextWriter tw = new StreamWriter("c:/Users/a3708906/Documents/Filereader - 13062012/Filereader/date.txt");
    if (result == DialogResult.OK) // Test result.
    {
        string file = openFileDialog1.FileName;
        FileInfo src = new FileInfo(file);
        TextReader reader = src.OpenText();
        text = reader.ReadLine();

        // while the text being read in from reader.ReadLine() is not null
        while (text != null)
        {
            text = reader.ReadLine();
            if (text != null)
            {
                string[] words = text.Split(',');
                noise.Add(Convert.ToSingle(words[3]));

                // write text to a file
                tw.WriteLine(text);
                //foreach (string word in words)
                //{
                //    tw.WriteLine(word);
                //}
            }
        }
    }
    tw.Close();

    TextWriter tw1 = new StreamWriter("c:/Users/a3708906/Documents/Filereader - 13062012/Filereader/noise.txt");
    foreach (float ns in noise)
    {
        tw1.WriteLine(Convert.ToString(ns));
        count++;
        sum += ns;
        mean = sum / count;

        float min1 = 0;
        if (ns > max)
            max = ns;
        else if (ns < max)
            min1 = ns;
        if (min1 < min && min1 > 0)
            min = min1;
        else
            min = min;
    }
    tw1.Close();

    TextWriter tw2 = new StreamWriter("c:/Users/a3708906/Documents/Filereader - 13062012/Filereader/summarymeans.txt");
    tw2.WriteLine("Mean Noise");
    tw2.WriteLine("==========");
    tw2.WriteLine("mote_noise 2: {0}", Convert.ToString(mean));
    tw2.Close();

    TextWriter tw3 = new StreamWriter("c:/Users/a3708906/Documents/Filereader - 13062012/Filereader/summarymaximums.txt");
    tw3.WriteLine("Maximum Noise");
    tw3.WriteLine("=============");
    tw3.WriteLine("mote_noise 2: {0}", Convert.ToString(max));
    tw3.Close();

    TextWriter tw4 = new StreamWriter("c:/Users/a3708906/Documents/Filereader - 13062012/Filereader/summaryminimums.txt");
    tw4.WriteLine("Minimum Noise");
    tw4.WriteLine("=============");
    tw4.WriteLine("mote_noise 2: {0}", Convert.ToString(min));
    tw4.Close();
}
I would be grateful if someone could help me translate this code to work with multiple files. Thank you.
Wrap your logic for processing a single file into a single Action or a void-returning function, then enumerate the files, switch them to a ParallelEnumerable, and call ForAll.
For example, if you made an Action or function named DoStuff(string filename) which does the processing for a single file, you can then call it with:
Directory.EnumerateFiles(dialog.SelectedPath).AsParallel().ForAll(DoStuff);
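A sketch of the DoStuff shape this assumes, reusing the asker's format (comma-separated values, noise in the fourth column, first line skipped as in the original; requires System, System.IO and System.Linq):
static void DoStuff(string filename)
{
    var noise = File.ReadLines(filename)
                    .Skip(1) // the original code discards the first line
                    .Select(line => Convert.ToSingle(line.Split(',')[3]))
                    .ToList();
    Console.WriteLine("{0}: mean={1} min={2} max={3}",
        filename, noise.Average(), noise.Min(), noise.Max());
}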
Your current code will work if you simply use Directory.GetFiles() properly. The easiest way to do it would be to have three inputs: one to get the directory, a second to get the file extension (if wanted), and a checkbox to ask whether or not you want to search the folders recursively.
Then instead of
string file = openFileDialog1.FileName;
you would instead have something like
// ensure the default fileExtensionDropdown.SelectedValue is "*"
string[] filePaths;
if (chkRecursiveSearch.IsChecked == true)
    filePaths = Directory.GetFiles(dlgFolderBrowser.SelectedPath, "*" + ddlFileExtension.SelectedValue, SearchOption.AllDirectories);
else
    filePaths = Directory.GetFiles(dlgFolderBrowser.SelectedPath, "*" + ddlFileExtension.SelectedValue);
Then you can use:
foreach (string path in filePaths) { /* do things */ }
to handle each file path the way you are right now.
Please note the code I've put here is definitely not as idiomatic and tidy as it could be, but since you said you were a beginner, I decided to be a bit clearer. If requested, I'll put up a more idiomatic take on things, though if we do that, we should probably clean up your initial code a bit as well.
Here is my scenario:
On a form I have a list of directories, a button, and a control to display multiline text.
In a loop I try to find all files in each directory and delete them.
When a file is deleted, I want to add text to the multiline control.
My problem is that while text is being added I cannot do anything else. The form is blocked, and if I try to do anything it simply stops responding.
Files are deleted using a BackgroundWorker:
private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    // this is a datatable with directories and other info
    MainDataset.CZYSZCZENIEDataTable CZYSZCZENIE = e.Argument as MainDataset.CZYSZCZENIEDataTable;
    CzyscPliki(CZYSZCZENIE, ReportProgress);
}

private void CzyscPliki(MainDataset.CZYSZCZENIEDataTable CZYSZCZENIE, ReportProgressDel del)
{
    DirectoryInfo dir = null;
    FileInfo[] files = null;
    bool subfolder = false;
    string katalog = "";
    string maska = "";
    string[] maski = null;
    long total = 0;
    string dirS;
    string fileS;
    long fileLen;

    // foreach directory to delete
    foreach (DataRow r in CZYSZCZENIE.Rows)
    {
        // CanRead - check that the row is not deleted or detached
        // r["CZYSC"].AsBool() - check whether the directory should be cleared
        if (r.CanRead() && r["CZYSC"].AsBool())
        {
            subfolder = r["PODKATALOGI"].AsBool();
            katalog = r["KATALOG"].AsString().TrimEnd('\\');
            maska = r["MASKA"].AsString();
            if (maska.IsEmpty())
                maska = "*";
            maski = maska.Split(';');
            dir = new DirectoryInfo(katalog);
            if (dir.Exists)
            {
                foreach (string s in maski)
                {
                    files = dir.GetFiles(s, (subfolder ? SearchOption.AllDirectories : SearchOption.TopDirectoryOnly));
                    dir.GetFiles();
                    foreach (FileInfo f in files)
                    {
                        dirS = f.Directory.FullName;
                        fileS = f.Name;
                        fileLen = f.Length;
                        try
                        {
                            f.Delete();
                            total += fileLen;
                            if (del != null)
                                // here is the problem: del - delegate to report state
                                // when it is called, it blocks the form
                                del(dirS, fileS, fileLen, total);
                        }
                        catch (Exception ex)
                        { }
                    }
                }
            }
        }
    }
}
// this is the delegate that appends text in the multiline control
// memoEdit1 is the control
// ceReportProgress.Checked - check whether the report should be added
private void ReportProgress(string directory, string file, long size, long totalSize)
{
    if (memoEdit1.InvokeRequired)
    {
        memoEdit1.BeginInvoke(new Action<string, string, long, long>(ReportProgress), directory, file, size, totalSize);
    }
    else
    {
        if (ceReportProgress.Checked)
        {
            if (file.IsEmpty())
                memoEdit1.AppendText("\r\nCzyszczenie katalogu " + directory); // "Cleaning directory"
            else
            {
                memoEdit1.AppendText(file);
                if (size > 0)
                {
                    // "Wielkość" = "Size"; megabajtów/kilobajtów/bajtów = megabytes/kilobytes/bytes
                    if (size > 1048576)
                    {
                        decimal d = size / 1048576;
                        d = decimal.Round(d, 2);
                        memoEdit1.AppendText("\tWielkość : " + d.AsString() + " megabajtów", false);
                    }
                    else if (size > 1024)
                    {
                        decimal d = (decimal)size / (decimal)1024;
                        d = decimal.Round(d, 2);
                        memoEdit1.AppendText("\tWielkość : " + d.AsString() + " kilobajtów", false);
                    }
                    else
                        memoEdit1.AppendText("\tWielkość : " + size.AsString() + " bajtów", false);
                }
                if (totalSize > 0)
                {
                    // "Zwolniono dotychczas" = "Freed so far"
                    if (totalSize > 1073741824)
                    {
                        decimal d = (decimal)totalSize / (decimal)1073741824;
                        d = decimal.Round(d, 2);
                        memoEdit1.AppendText("Zwolniono dotychczas : " + d.AsString() + " gigabajtów");
                    }
                    else if (totalSize > 1048576)
                    {
                        decimal d = (decimal)totalSize / (decimal)1048576;
                        d = decimal.Round(d, 2);
                        memoEdit1.AppendText("Zwolniono dotychczas : " + d.AsString() + " megabajtów");
                    }
                    else if (totalSize > 1024)
                    {
                        decimal d = (decimal)totalSize / (decimal)1024;
                        d = decimal.Round(d, 2);
                        memoEdit1.AppendText("Zwolniono dotychczas : " + d.AsString() + " kilobajtów");
                    }
                    else
                        memoEdit1.AppendText("Zwolniono dotychczas : " + totalSize.AsString() + " bajtów");
                }
            }
            // scroll to the end of the control
            memoEdit1.ScrollToEnd();
        }
    }
}
How can I improve this so that it does not block the form?
You are calling ReportProgress too often. Do it more than about a thousand times per second and the UI thread gets flooded with requests that it cannot keep up with. It won't get around to doing its normal duties, which include painting the controls and responding to the mouse and keyboard, so it looks frozen. This gets worse as the UI update code gets more expensive: updating text in a TextBox that already contains a lot of text can get quite slow.
The diagnostic is the UI still being frozen for a while after the BGW stops running, as it works through the backlog in the invoke request queue, then suddenly coming back alive when the queue is finally emptied.
You need to throttle the rate at which you call BeginInvoke(). It never makes sense to call it more frequently than once every 50 milliseconds; a human cannot perceive the difference beyond that. Collect the info in a List<> so you can BeginInvoke() a lot less frequently (a sketch follows). That's still no complete guarantee if your worker can produce results faster than the UI thread could ever keep up with, in which case slowing down the worker would be a fix. That's easy by using Invoke instead of BeginInvoke.
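A sketch of that batching idea (the names are illustrative, and the final partial batch still needs one last flush when the worker finishes):
private readonly List<string> pendingLines = new List<string>();
private readonly System.Diagnostics.Stopwatch lastFlush = System.Diagnostics.Stopwatch.StartNew();

// Called from the worker thread instead of invoking once per file.
private void ReportProgressBatched(string line)
{
    lock (pendingLines) pendingLines.Add(line);
    if (lastFlush.ElapsedMilliseconds < 50) return; // throttle to ~20 UI updates per second
    lastFlush.Restart();

    string[] batch;
    lock (pendingLines)
    {
        batch = pendingLines.ToArray();
        pendingLines.Clear();
    }
    memoEdit1.BeginInvoke(new Action(() =>
        memoEdit1.AppendText("\r\n" + string.Join("\r\n", batch))));
}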
If this worker runs asynchronously, then you can have a form which stays responsive.
Besides that, there are problems:
You are running the loop in another function - it makes the operation non-responsive.
You are not even checking whether the user wants to cancel (just a point I wanted to make) - handle the DoWorkEventArgs's Cancel property inside the foreach loop.
Move the function CzyscPliki's code into backgroundWorker1_DoWork (it's too tiny anyway).
EDIT:
If you don't want to move the code into the DoWork event handler, then it's better to use a Thread for more control. I'm not an expert on it, but you will find plenty of code on how to implement that.