I have the following console application for generating bank account numbers.
static void Main(string[] args)
{
string path = @"G:\BankNumbers";
var bans = BankAcoutNumbers.BANS;
const int MAX_FILES = 80;
const int BANS_PER_FILE = 81818182/80;
int bansCounter = 0;
var part = new List<int>();
var maxNumberOfFiles = 10;
Stopwatch timer = new Stopwatch();
var fileCounter = 0;
if (!Directory.Exists(path))
{
DirectoryInfo di = Directory.CreateDirectory(path);
}
try
{
while (fileCounter <= maxNumberOfFiles)
{
timer.Start();
foreach (var bank in BankAcoutNumbers.BANS)
{
part.Add(bank);
if (++bansCounter >= BANS_PER_FILE)
{
string fileName = string.Format("{0}-{1}", part[0], part[part.Count - 1]);
string outputToFile = ""; // Otherwise you don't see the lines in the file, just a single line!!
Console.WriteLine("NR{0}", fileName);
string subString = System.IO.Path.Combine(path, "BankNumbers"); // Needed, because otherwise the files will not be stored in the correct folder!!
fileName = subString + fileName;
foreach (var partBan in part)
{
Console.WriteLine(partBan);
outputToFile += partBan + Environment.NewLine; // Build the file contents line by line
}
System.IO.File.WriteAllText(fileName, outputToFile);//Writes to file system.
part.Clear();
bansCounter = 0;
//System.IO.File.WriteAllText(fileName, part.ToString());
if (++fileCounter >= MAX_FILES)
break;
}
}
}
timer.Stop();
Console.WriteLine(timer.Elapsed.Seconds);
}
catch (Exception)
{
throw;
}
System.Console.WriteLine("Press any key to exit.");
System.Console.ReadKey();
}
This generates 81 million bank account records separated over 80 files. But can I speed up the process with threading?
You're talking about speeding up a process whose bottleneck is overwhelmingly likely the file write speed. You can't really effectively parallelize writing to a single disk.
You may see slight increases in speed if you spawn a worker thread responsible for just the file I/O. In other words, create a buffer, have your main thread dump contents into it while the other thread writes it to disk. It's the classic producer/consumer dynamic. I wouldn't expect serious speed gains, however.
Also keep in mind that writing to the console will slow you down, but you can keep that in the main thread and you'll probably be fine. Just make sure you put a limit on the buffer size and have the producer thread hang back when the buffer is full.
Edit: Also have a look at the link L-Three provided; using a BufferedStream would be an improvement (and would probably render a consumer thread unnecessary).
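If you do want to try the worker-thread route, here is a minimal sketch of that producer/consumer split using BlockingCollection<T> (the output path and chunk contents are placeholders, not the question's exact data):
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;
class ProducerConsumerWrite
{
    static void Main()
    {
        // Bounded capacity: the producer blocks when the buffer is full,
        // so memory stays flat while the disk catches up.
        var buffer = new BlockingCollection<string>(boundedCapacity: 100);
        // Consumer: the only thread that touches the disk.
        Task writer = Task.Run(() =>
        {
            foreach (string chunk in buffer.GetConsumingEnumerable())
                File.AppendAllText(@"G:\BankNumbers\output.txt", chunk);
        });
        // Producer: the main thread generates the data.
        for (int ban = 0; ban < 1000; ban++)
            buffer.Add(ban + Environment.NewLine);
        buffer.CompleteAdding(); // signal "no more items"
        writer.Wait();           // let the consumer drain the buffer
    }
}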
Your process can be divided into two steps:
Generate an account
Save the account in the file
The first step can be done in parallel as there is no dependency between accounts. That is, while creating account number xyz you don't have to rely on data from account xyz - 1 (as it may not yet be created).
The problematic bit is writing the data into the file. You don't want several threads trying to access and write to the same file, and adding locks will likely make your code a nightmare to maintain. The other issue is that it's the writing to the file that slows the whole process down.
At the moment, in your code, creating an account and writing it to the file happen in one process.
What you can try is to separate these processes. So first you create all the accounts and keep them in some collection; here multi-threading can be used safely. Only when all the accounts are created do you save them.
Improving the saving process will take a bit more work. You will have to divide all the accounts into, say, 8 separate collections and create a separate file for each collection. Then you take the first collection and the first file, and create a thread that writes the data to the file; the same for the second collection and second file, and so on. These 8 processes can run in parallel, and you do not have to worry that more than one thread will try to access the same file.
Below some pseudo-code to illustrate the idea:
public void CreateAndSaveAccounts()
{
List<Account> accounts = this.CreateAccounts();
// Divide the accounts into separate batches
// Of course the process can (and should) be automated.
List<List<Account>> accountsInSeparateBatches =
new List<List<Account>>
{
accounts.GetRange(0, 10000000), // First batch of 10 million
accounts.GetRange(10000000, 10000000), // Second batch of 10 million
accounts.GetRange(20000000, 10000000) // Third batch of 10 million
// ...
};
// Save accounts in parallel
Parallel.For(0, accountsInSeparateBatches.Count,
i =>
{
string filePath = string.Format(@"C:\file{0}", i);
this.SaveAccounts(accountsInSeparateBatches[i], filePath);
}
);
}
public List<Account> CreateAccounts()
{
// Create accounts here
// and return them as a collection.
// Use parallel processing wherever possible
}
public void SaveAccounts(List<Account> accounts, string filePath)
{
// Save accounts to file
// The method creates a thread to do the work.
}
I am working on a program that scans drop folders for files, and registers them to another system that requires a duration for the file. The best solution I've been able to find so far is to use MediaInfo to get the duration from the header, but for some reason it tends to take a few seconds to return a result.
Suppose I have a list of 1,000 file paths, and I want to get the duration for each one, but getting the duration takes 15 seconds. Linear iteration over the list would take just over 4 hours, and even running 8 tasks in parallel would take half an hour. With my tests, this would be the best case scenario.
I've tried using the MediaInfo DLL as well as calling the .exe, and both seemed to have similar processing times.
DLL Code:
MediaInfo MI;
public Form1()
{
InitializeComponent();
MI = new MediaInfo();
}
private void button1_Click(object sender, EventArgs e)
{
MI.Open(textBox1.Text);
MI.Option("Inform", "Video;%Duration%");
label2.Text = MI.Inform();
MI.Close();
}
Executable code:
Process proc = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = "MediaInfo.exe",
Arguments = $"--Output=Video;%Duration% \"{textBox1.Text}\"",
UseShellExecute = false,
RedirectStandardOutput = true,
CreateNoWindow = true
}
};
StringBuilder line = new StringBuilder();
proc.Start();
while (!proc.StandardOutput.EndOfStream)
{
line.Append(proc.StandardOutput.ReadLine());
}
label2.Text = line.ToString();
It should be noted that the files being processed are on a networked drive, but I have tested retrieving the duration of a local file and it was only a few seconds faster.
Note: this program has to run on Windows Server 2003 R2, which means .NET 4.0 only. Most of the files I will be processing are .mov, but I can't restrict it to that.
Some better code (prefer the DLL call; init takes time) with options for reducing the scan duration:
MediaInfo MI;
public Form1()
{
InitializeComponent();
MI = new MediaInfo();
MI.Option("ParseSpeed", "0"); // Advanced information (e.g. GOP size, captions detection) not needed, request to scan as fast as possible
MI.Option("ReadByHuman", "0"); // Human readable strings are not needed, no noeed to spend time on them
}
private void button1_Click(object sender, EventArgs e)
{
MI.Open(textBox1.Text);
label2.Text = MI.Get(Stream_Video, "Duration"); //Note: prefer Stream_General if you want the duration of the program (here, you select the duration of the video stream)
MI.Close();
}
There are several possibilities for improving parsing time depending on your specific needs (i.e. if you don't care about a lot of features), but that requires code added directly to MediaInfo (e.g. for MP4/QuickTime files, getting only the duration could take less than 200 ms if I disable other features). Add a feature request if you need speed.
Jérôme, developer of MediaInfo
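For the scan loop itself, one way to fan out across files on .NET 4.0 is Parallel.ForEach with a thread-local MediaInfo instance. This is only a sketch mirroring the wrapper calls in the answer above; one instance per worker is an assumption made to avoid sharing state, and filePaths/durations are placeholder names:
var durations = new ConcurrentDictionary<string, string>();
Parallel.ForEach(filePaths,
    new ParallelOptions { MaxDegreeOfParallelism = 8 },
    () =>
    {
        // One MediaInfo per worker thread, configured once.
        var mi = new MediaInfo();
        mi.Option("ParseSpeed", "0");
        mi.Option("ReadByHuman", "0");
        return mi;
    },
    (path, state, mi) =>
    {
        mi.Open(path);
        durations[path] = mi.Get(Stream_Video, "Duration");
        mi.Close();
        return mi;
    },
    mi => { }); // nothing to clean up in this wrapper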
This code executes 4 threads at 15-minute intervals. The last time I ran it, the first 15 minutes copied fast (20 files in 6 minutes), but the second 15 minutes were much slower. It's sporadic, and I want to make certain that, if there's any bottleneck, it's a bandwidth limitation with the remote server.
EDIT: I'm monitoring the last run, and the 15:00 and :45 copied in under 8 minutes each. The :15 hasn't finished and neither has the :30; both began at least 10 minutes before the :45.
Here's my code:
static void Main(string[] args)
{
Timer t0 = new Timer((s) =>
{
Class myClass0 = new Class();
myClass0.DownloadFilesByPeriod(taskRunDateTime, 0, cts0.Token);
Copy0Done.Set();
}, null, TimeSpan.FromMinutes(20), TimeSpan.FromMilliseconds(-1));
Timer t1 = new Timer((s) =>
{
Class myClass1 = new Class();
myClass1.DownloadFilesByPeriod(taskRunDateTime, 1, cts1.Token);
Copy1Done.Set();
}, null, TimeSpan.FromMinutes(35), TimeSpan.FromMilliseconds(-1));
Timer t2 = new Timer((s) =>
{
Class myClass2 = new Class();
myClass2.DownloadFilesByPeriod(taskRunDateTime, 2, cts2.Token);
Copy2Done.Set();
}, null, TimeSpan.FromMinutes(50), TimeSpan.FromMilliseconds(-1));
Timer t3 = new Timer((s) =>
{
Class myClass3 = new Class();
myClass3.DownloadFilesByPeriod(taskRunDateTime, 3, cts3.Token);
Copy3Done.Set();
}, null, TimeSpan.FromMinutes(65), TimeSpan.FromMilliseconds(-1));
}
public struct FilesStruct
{
public string RemoteFilePath;
public string LocalFilePath;
}
private void DownloadFilesByPeriod(DateTime TaskRunDateTime, int Period, Object obj)
{
FilesStruct[] Array = GetAllFiles(TaskRunDateTime, Period);
//Array has 20 files for the specific period.
using (Session session = new Session())
{
// Connect
session.Open(sessionOptions);
TransferOperationResult transferResult;
foreach (FilesStruct u in Array)
{
if (session.FileExists(u.RemoteFilePath)) //File exists remotely
{
if (!File.Exists(u.LocalFilePath)) //File does not exist locally
{
transferResult = session.GetFiles(u.RemoteFilePath, u.LocalFilePath);
transferResult.Check();
foreach (TransferEventArgs transfer in transferResult.Transfers)
{
//Log that File has been transferred
}
}
else
{
using (StreamWriter w = File.AppendText(Logger._LogName))
{
//Log that File exists locally
}
}
}
else
{
using (StreamWriter w = File.AppendText(Logger._LogName))
{
//Log that File does not exist remotely
}
}
if (token.IsCancellationRequested)
{
break;
}
}
}
}
Something is not quite right here. First, you're setting up 4 timers to run in parallel. If you think about it, there is no need: you don't need 4 threads running in parallel all the time, you just need to initiate tasks at specific intervals. So how many timers do you need? One.
The second problem is why TimeSpan.FromMilliseconds(-1)? What is the purpose of that? I can't figure out why you put that in there, but I wouldn't.
The third problem, not related to multi-programming but worth pointing out anyway, is that you create a new instance of Class each time, which is unnecessary. It would be necessary if the class needed constructor state, or if your logic accessed different methods or fields of the class in some order. In your case, all you want to do is call the method, so you don't need a new instance every time; just make the method you're calling static.
Here is what I would do:
Store the files you need to download in an array / List<>. Notice that you're doing the same thing every time; there's no need to write 4 different versions of that code. Store the items in an array, then just change the index in the call!
Set up the timer at perhaps a 5-second interval. When it reaches the 20-min / 35-min / etc. mark, spawn a new thread to do the task (see the sketch after this list). That way a new task can start even if the previous one is not finished.
Wait for all threads to complete (terminate). When they do, check if they throw exceptions, and handle them / log them if necessary.
After everything is done, terminate the program.
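A rough sketch of steps 1-3 with a single timer. The schedule marks, taskRunDateTime, and cts come from the question; DownloadFilesByPeriod is assumed to have been made static as suggested above:
var startMarks = new[] { 20, 35, 50, 65 }; // minutes, one per period
var started = new bool[startMarks.Length];
var tasks = new List<Task>();
var clock = Stopwatch.StartNew();
// One timer polling every 5 seconds; it only decides when to spawn work.
var timer = new System.Threading.Timer(_ =>
{
    for (int i = 0; i < startMarks.Length; i++)
    {
        if (!started[i] && clock.Elapsed.TotalMinutes >= startMarks[i])
        {
            started[i] = true;
            int period = i; // capture a copy for the closure
            tasks.Add(Task.Factory.StartNew(() =>
                Class.DownloadFilesByPeriod(taskRunDateTime, period, cts.Token)));
        }
    }
}, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
// Later: Task.WaitAll(tasks.ToArray()); then inspect and log any exceptions.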
For step 2, you have the option to use the new async keyword if you're using .NET 4.5. But it won't make a noticeable difference if you use threads manually.
As for why it's so slow: why don't you check your system status using Task Manager? Is the CPU maxed out, or is the network throughput occupied by something else? You can easily tell the answer yourself from there.
The problem was the sftp client.
The purpose of the console application was to loop through a List<> and download the files. I tried WinSCP and, even though it did the job, it was very slow. I also tested SharpSSH, and it was even slower than WinSCP.
I finally ended up using SSH.NET which, at least in my particular case, was much faster than both WinSCP and SharpSSH. I think the problem with WinSCP is that there was no evident way of disconnecting after I was done. With SSH.NET I could connect/disconnect after every file download, something I couldn't do with WinSCP.
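For reference, the per-file connect/download/disconnect pattern with SSH.NET looks roughly like this (host, credentials, and the files list are placeholders; FilesStruct is from the question):
using (var sftp = new SftpClient("host", 22, "user", "password"))
{
    foreach (FilesStruct u in files)
    {
        sftp.Connect();
        using (var local = File.Create(u.LocalFilePath))
        {
            sftp.DownloadFile(u.RemoteFilePath, local);
        }
        sftp.Disconnect(); // fully release the connection between files
    }
}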
I am working on a logging system for a web application. It logs a sequence of events in a dictionary object before sending it to my logging object using Task.Factory.StartNew(() => iLogEventSave()). The logger seemed to work fine, but in some instances some events were not being saved properly, so I used the lock() statement to correct the issue. That seemed to do the trick, but the application's performance has dramatically decreased as a result. How can I have the UI/page render without having to wait for the tasks to finish their job?
Below is the code
private static readonly object Locker = new object();
public void iLogEventSave(object state)
{
XmlDocument doc = new XmlDocument();
IDictionary<string, string> EventDetails = (IDictionary<string, string>)state;
string logFile = "";
if(ConfigurationManager.AppSettings["Log_File_Path"].ToString() =="")
{
logFile = HttpRuntime.AppDomainAppPath + "Logs\\" + DateTime.Now.ToString("yyyy_MM_dd") + ".txt";
}
else
{
logFile = ConfigurationManager.AppSettings["Log_File_Path"].ToString() + DateTime.Now.ToString("yyyy_MM_dd") + ".txt";
}
lock (Locker)
{
if (File.Exists(logFile))
{
doc.Load(logFile);
}
else
{
var root = doc.CreateElement("Log");
doc.AppendChild(root);
}
var el = (XmlElement)doc.DocumentElement.AppendChild(doc.CreateElement("Event"));
foreach (KeyValuePair<string, string> item in EventDetails)
{
XmlElement Desc = doc.CreateElement("Details");
Desc.SetAttribute(item.Key.ToString(), item.Value);
el.AppendChild(Desc);
}
doc.Save(logFile);
}
}
If your log did not save several events while executing asynchronously, you have an unhandled error that you did not address. Considering that you're using a file, I'm going to go out on a limb and say that it failed because two threads were competing for access to the same log file, and the first thread to grab it locked the other one out. This is why your lock now works: it prevents other threads from trying to grab the file.
But logging to a file means that you've effectively restricted yourself to one thread at a time and dealing with the entire file as it grows. You have to load more and more, append more and more, and locking the thread means that the more threads are waiting to log the events, the higher your overhead. All this could certainly add up to a decrease in performance.
May I recommend using a database table to log events? File I/O is very expensive, resource- and time-wise. Databases have less overhead and far better throughput by comparison in these very scenarios.
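If a database is not an option, another way to keep the page from waiting is a single dedicated writer thread fed by a queue, so request threads only enqueue and return. A minimal sketch, with SaveToXml standing in for the existing iLogEventSave body minus the lock:
public static class QueuedLogger
{
    private static readonly BlockingCollection<IDictionary<string, string>> Queue =
        new BlockingCollection<IDictionary<string, string>>();
    static QueuedLogger()
    {
        // One background thread owns the log file; no lock contention on request threads.
        var worker = new Thread(() =>
        {
            foreach (var eventDetails in Queue.GetConsumingEnumerable())
                SaveToXml(eventDetails); // placeholder for the XML-writing code above
        });
        worker.IsBackground = true;
        worker.Start();
    }
    public static void Log(IDictionary<string, string> eventDetails)
    {
        Queue.Add(eventDetails); // returns immediately; the page never waits on file I/O
    }
}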
I am writing a WPF application in C# and I need to move some files; the rub is that I really, REALLY need to know whether the files make it. To do this, I wrote a check that makes sure the file arrives in the target directory after the move. The problem is that sometimes I get to the check before the file finishes moving:
System.IO.File.Move(file.FullName, endLocationWithFile);
System.IO.FileInfo[] filesInDirectory = endLocation.GetFiles();
foreach (System.IO.FileInfo temp in filesInDirectory)
{
if (temp.Name == shortFileName)
{
return true;
}
}
// The file we sent over has not gotten to the correct directory....something went wrong!
throw new IOException("File did not reach destination");
}
catch (Exception e)
{
//Something went wrong, return a fail;
logger.writeErrorLog(e);
return false;
}
Could somebody tell me how to make sure that the file actually gets to the destination? The files I will be moving can be VERY large (full-HD MP4 files of up to 2 hours).
Thanks!
You could use streams with async/await to ensure the file is completely copied.
Something like this should work:
private void Button_Click(object sender, RoutedEventArgs e)
{
string sourceFile = @"\\HOMESERVER\Development Backup\Software\Microsoft\en_expression_studio_4_premium_x86_dvd_537029.iso";
string destinationFile = "G:\\en_expression_studio_4_premium_x86_dvd_537029.iso";
MoveFile(sourceFile, destinationFile);
}
private async void MoveFile(string sourceFile, string destinationFile)
{
try
{
using (FileStream sourceStream = File.Open(sourceFile, FileMode.Open))
{
using (FileStream destinationStream = File.Create(destinationFile))
{
await sourceStream.CopyToAsync(destinationStream);
if (MessageBox.Show("I made it in one piece :), would you like to delete me from the original file?", "Done", MessageBoxButton.YesNo) == MessageBoxResult.Yes)
{
sourceStream.Close();
File.Delete(sourceFile);
}
}
}
}
catch (IOException ioex)
{
MessageBox.Show("An IOException occured during move, " + ioex.Message);
}
catch (Exception ex)
{
MessageBox.Show("An Exception occured during move, " + ex.Message);
}
}
If you're using VS2010 you will have to install the Async CTP to use the new async/await syntax.
You could watch for the files to disappear from the original directory, and then confirm that they indeed appeared in the target directory.
I have not had great experience with file watchers. I would probably have the thread doing the move wait on an AutoResetEvent while a separate thread or timer periodically checks that the files have disappeared from the original location and appeared in the new location, and perhaps (depending on your environment and needs) performs a consistency check (e.g. an MD5 check) of the files. Once those conditions are satisfied, the "checker" thread/timer triggers the AutoResetEvent so that the original thread can progress.
Include some "this is taking way too long" logic in the "checker".
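A compact sketch of that arrangement (paths, the timeout, and the polling interval are placeholders; a hash comparison could be added inside the timer callback):
var moved = new AutoResetEvent(false);
// Checker: polls once a second until the file has left the source
// and shown up at the destination.
var checker = new System.Threading.Timer(_ =>
{
    if (!File.Exists(sourcePath) && File.Exists(destinationPath))
        moved.Set(); // wake the mover thread
}, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
// Mover thread:
File.Move(sourcePath, destinationPath);
if (!moved.WaitOne(TimeSpan.FromMinutes(30))) // the "taking way too long" guard
    throw new IOException("Move did not complete within the time limit");
checker.Dispose();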
Why not manage the copy yourself by copying streams?
//http://www.dotnetthoughts.net/writing_file_with_non_cache_mode_in_c/
const FileOptions FILE_FLAG_NO_BUFFERING = (FileOptions) 0x20000000;
//experiment with different buffer sizes for optimal speed
var bufLength = 4096;
using(var outFile =
new FileStream(
destPath,
FileMode.Create,
FileAccess.Write,
FileShare.None,
bufLength,
FileOptions.WriteThrough | FILE_FLAG_NO_BUFFERING))
using(var inFile = File.OpenRead(srcPath))
{
//either
//inFile.CopyTo(outFile);
//or
var fileSizeInBytes = inFile.Length;
var buf = new byte[bufLength];
long totalCopied = 0L;
int amtRead;
while((amtRead = inFile.Read(buf,0,bufLength)) > 0)
{
outFile.Write(buf,0,amtRead);
totalCopied += amtRead;
double progressPct =
Convert.ToDouble(totalCopied) * 100d / fileSizeInBytes;
progressPct.Dump(); // Dump() is LINQPad-specific; use Console.WriteLine(progressPct) elsewhere
}
}
//file is written
You most likely want the move to happen in a separate thread so that you aren't stopping the execution of your application for hours.
If the program cannot continue without the move being completed, then you could open a dialog and check in on the move thread periodically to update a progress tracker. This provides the user with feedback and will prevent them from feeling as if the program has frozen.
There's info and an example on this here:
http://hintdesk.com/c-wpf-copy-files-with-progress-bar-by-copyfileex-api/
Try checking periodically, in a background task, whether the copied file's size has reached the size of the original file (you can also compare hashes of the two files).
I ran into a similar problem recently.
OnBackupStarts();
//.. do stuff
new TaskFactory().StartNew(() =>
{
OnBackupStarts();
//.. do stuff
OnBackupEnds();
});
void OnBackupEnds()
{
if (BackupChanged != null)
{
BackupChanged(this, new BackupChangedEventArgs(BackupState.Done));
}
}
Do not wait; react to the event.
In the first place, consider that moving a file within an operating system does not "recreate" the file in the new directory; it only changes the file's location data in the file allocation table, since physically copying all the bytes and deleting the old ones would be a waste of time.
For that reason, moving files is a very fast process, no matter the file size.
EDIT: As Mike Christiansen states in his comment, this "speedy" process only happens when files move within the same volume (you know, C:\... to C:\...).
Thus, the copy/delete behavior proposed by "sa_ddam213" in his response will work, but it is not optimal (it takes longer to finish, and it will not work if, for example, you don't have enough free disk space to hold the copy while the old file still exists, ...).
The MSDN documentation for the File.Move(source, destination) method does not specify whether it waits for completion, but the code given as an example performs a simple File.Exists(...) check, saying that finding the original file still there "is unexpected":
// Move the file.
File.Move(path, path2);
Console.WriteLine("{0} was moved to {1}.", path, path2);
// See if the original exists now.
if (File.Exists(path))
{
Console.WriteLine("The original file still exists, which is unexpected.");
}
else
{
Console.WriteLine("The original file no longer exists, which is expected.");
}
Perhaps you could use a similar approach: check in a while loop for the existence of the new file and the non-existence of the old one, with a timed exit from the loop just in case something unexpected happens at the operating-system level and the files get lost:
// Perform the move
File.Move(source, destination);
// Set an "exit" time after which the loop will end, for example 15 seconds.
// A move within the same volume should be almost immediate; across volumes it is not.
DateTime exitDateTime = DateTime.Now.AddSeconds(15);
bool exitLoopByExpiration = false;
// Wait here until the move is finished (checked via file existence) or the time limit expires
while (File.Exists(source) && !File.Exists(destination) && !exitLoopByExpiration)
{
    Thread.Sleep(50); // assumption: a brief sleep so the loop doesn't burn a core while polling
    // Compare the current time with the exit time; if we passed it, exit the loop by expiration
    if (DateTime.Now.CompareTo(exitDateTime) > 0) { exitLoopByExpiration = true; }
}
if (exitLoopByExpiration)
{
    // Extra work here, like logging the problem or throwing an exception,
    // when the loop exited because of time expiration
}
I have checked this solution and seems to work without problems.
I run through millions of records and sometimes I have to debug using Console.WriteLine to see what is going on.
However, Console.WriteLine is very slow, considerably slower than writing to a file.
BUT it is very convenient - does anyone know of a way to speed it up?
If it is just for debugging purposes you should use Debug.WriteLine instead. This will most likely be a bit faster than using Console.WriteLine.
Example
Debug.WriteLine("There was an error processing the data.");
You can use the OutputDebugString API function to send a string to the debugger. It doesn't wait for anything to redraw and this is probably the fastest thing you can get without digging into the low-level stuff too much.
The text you give to this function will go into Visual Studio Output window.
[DllImport("kernel32.dll")]
static extern void OutputDebugString(string lpOutputString);
Then you just call OutputDebugString("Hello world!");
Do something like this:
public static class QueuedConsole
{
private static StringBuilder _sb = new StringBuilder();
private static int _lineCount;
public static void WriteLine(string message)
{
_sb.AppendLine(message);
++_lineCount;
if (_lineCount >= 10)
WriteAll();
}
public static void WriteAll()
{
Console.WriteLine(_sb.ToString());
_lineCount = 0;
_sb.Clear();
}
}
QueuedConsole.WriteLine("This message will not be written directly, but with nine other entries to increase performance.");
//after your operations, end with write all to get the last lines.
QueuedConsole.WriteAll();
Here is another example: Does Console.WriteLine block?
I recently did a benchmark battery for this on .NET 4.8. The tests included many of the proposals mentioned on this page, including Async and blocking variants of both BCL and custom code, and then most of those both with and without dedicated threading, and finally scaled across power-of-2 buffer sizes.
The fastest method, now used in my own projects, buffers 64K of wide (Unicode) characters at a time from .NET directly to the Win32 function WriteConsoleW without copying or even hard-pinning. Remainders larger than 64K, after filling and flushing one buffer, are also sent directly, and in-situ as well. The approach deliberately bypasses the Stream/TextWriter paradigm so it can (obviously enough) provide .NET text that is already Unicode to a (native) Unicode API without all the superfluous memory copying/shuffling and byte[] array allocations required for first "decoding" to a byte stream.
If there is interest (perhaps because the buffering logic is slightly intricate), I can provide the source for the above; it's only about 80 lines. However, my tests determined that there's a simpler way to get nearly the same performance, and since it doesn't require any Win32 calls, I'll show this latter technique instead.
The following is way faster than Console.Write:
public static class FastConsole
{
static readonly BufferedStream str;
static FastConsole()
{
Console.OutputEncoding = Encoding.Unicode; // crucial
// avoid special "ShadowBuffer" for hard-coded size 0x14000 in 'BufferedStream'
str = new BufferedStream(Console.OpenStandardOutput(), 0x15000);
}
public static void WriteLine(String s) => Write(s + "\r\n");
public static void Write(String s)
{
// avoid endless 'GetByteCount' dithering in 'Encoding.Unicode.GetBytes(s)'
var rgb = new byte[s.Length << 1];
Encoding.Unicode.GetBytes(s, 0, s.Length, rgb, 0);
lock (str) // (optional, can omit if appropriate)
str.Write(rgb, 0, rgb.Length);
}
public static void Flush() { lock (str) str.Flush(); }
};
Note that this is a buffered writer, so you must call Flush() when you have no more text to write.
I should also mention that, as shown, technically this code assumes 16-bit Unicode (UCS-2, as opposed to UTF-16) and thus won't properly handle surrogate pairs for characters beyond the Basic Multilingual Plane. The point hardly seems important given the more extreme limitations on console text display in general, but it could perhaps still matter for piping/redirection.
Usage:
FastConsole.WriteLine("hello world.");
// etc...
FastConsole.Flush();
On my machine, this gets about 77,000 lines/second (mixed-length) versus only 5,200 lines/sec under identical conditions for normal Console.WriteLine. That's a factor of almost 15x speedup.
These are controlled comparison results only; note that absolute measurements of console output performance are highly variable, depending on the console window settings and runtime conditions, including size, layout, fonts, DWM clipping, etc.
Why Console is slow:
Console output is actually an IO stream that's managed by your operating system. Most IO classes (like FileStream) have async methods but the Console class was never updated so it always blocks the thread when writing.
Console.WriteLine is backed by SyncTextWriter which uses a global lock to prevent multiple threads from writing partial lines. This is a major bottleneck that forces all threads to wait for each other to finish the write.
If the console window is visible on screen then there can be significant slowdown because the window needs to be redrawn before the console output is considered flushed.
Solutions:
Wrap the Console stream with a StreamWriter and then use async methods:
var sw = new StreamWriter(Console.OpenStandardOutput());
await sw.WriteLineAsync("...");
You can also set a larger buffer if you need to use sync methods. The call will occasionally block when the buffer gets full and is flushed to the stream.
// set a buffer size
var sw = new StreamWriter(Console.OpenStandardOutput(), Encoding.UTF8, 8192);
// this write call will block when buffer is full
sw.Write("...")
If you want the fastest writes, though, you'll need to make your own buffer class that writes to memory and flushes to the console asynchronously in the background using a single thread, without locking. The Channel<T> class in .NET Core 2.1 makes this simple and fast. Plenty of other questions show that code, but comment if you need tips.
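A minimal sketch of that idea (assumes .NET Core 2.1+ with the System.Threading.Channels package; the names are illustrative):
public static class ChannelConsole
{
    private static readonly Channel<string> _lines =
        Channel.CreateUnbounded<string>(new UnboundedChannelOptions { SingleReader = true });
    // A single background reader owns the console; callers never block on I/O.
    private static readonly Task _reader = Task.Run(async () =>
    {
        while (await _lines.Reader.WaitToReadAsync())
            while (_lines.Reader.TryRead(out var line))
                Console.WriteLine(line);
    });
    public static void WriteLine(string line) => _lines.Writer.TryWrite(line);
    public static Task CompleteAsync()
    {
        _lines.Writer.Complete(); // no more writes
        return _reader;           // await this to flush the remaining lines
    }
}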
A little old thread and maybe not exactly what the OP is looking for, but I ran into the same question recently, when processing audio data in real time.
I compared Console.WriteLine to Debug.WriteLine with this code and used DebugView as an alternative to the DOS box. It's just an executable (nothing to install) and can be customized in very neat ways (filters & colors!). It has no problems with tens of thousands of lines and manages memory quite well (I could not find any kind of leak, even after days of logging).
After doing some testing in different environments (e.g.: virtual machine, IDE, background processes running, etc) I made the following observations:
Debug is almost always faster
For small bursts of lines (<1000), it's about 10 times faster
For larger chunks it seems to converge to about 3x
If the Debug output goes to the IDE, Console is faster :-)
If DebugView is not running, Debug gets even faster
For really large amounts of consecutive output (>10000), Debug gets slower and Console stays constant. I presume this is due to the memory that Debug has to allocate and Console does not.
Obviously, it makes a difference if DebugView is actually "in-view" or not, as the many gui updates have a significant impact on the overall performance of the system, while Console simply hangs, if visible or not. But it's hard to put numbers on that one...
I did not try multiple threads writing to the Console, as I think this should generally be avoided. I never had (performance) problems when writing to Debug from multiple threads.
If you compile with Release settings, usually all Debug statements are omitted and Trace should produce the same behaviour as Debug.
I used VS2017 & .Net 4.6.1
Sorry for so much code, but I had to tweak it quite a lot to actually measure what I wanted to. If you can spot any problems with the code (biases, etc.), please comment. I would love to get more precise data for real life systems.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;
namespace Console_vs_Debug {
class Program {
class Trial {
public string name;
public Action console;
public Action debug;
public List<float> consoleMeasuredTimes = new List<float>();
public List<float> debugMeasuredTimes = new List<float>();
}
static Stopwatch sw = new Stopwatch();
private static int repeatLoop = 1000;
private static int iterations = 2;
private static int dummy = 0;
static void Main(string[] args) {
if (args.Length == 2) {
repeatLoop = int.Parse(args[0]);
iterations = int.Parse(args[1]);
}
// do some dummy work
for (int i = 0; i < 100; i++) {
Console.WriteLine("-");
Debug.WriteLine("-");
}
for (int i = 0; i < iterations; i++) {
foreach(Trial trial in trials) {
Thread.Sleep(50);
sw.Restart();
for (int r = 0; r < repeatLoop; r++)
trial.console();
sw.Stop();
trial.consoleMeasuredTimes.Add(sw.ElapsedMilliseconds);
Thread.Sleep(1);
sw.Restart();
for (int r = 0; r < repeatLoop; r++)
trial.debug();
sw.Stop();
trial.debugMeasuredTimes.Add(sw.ElapsedMilliseconds);
}
}
Console.WriteLine("---\r\n");
foreach(Trial trial in trials) {
var consoleAverage = trial.consoleMeasuredTimes.Average();
var debugAverage = trial.debugMeasuredTimes.Average();
Console.WriteLine(trial.name);
Console.WriteLine($ " console: {consoleAverage,11:F4}");
Console.WriteLine($ " debug: {debugAverage,11:F4}");
Console.WriteLine($ "{consoleAverage / debugAverage,32:F2} (console/debug)");
Console.WriteLine();
}
Console.WriteLine("all measurements are in milliseconds");
Console.WriteLine("anykey");
Console.ReadKey();
}
private static List<Trial> trials = new List<Trial> {
new Trial {
name = "constant",
console = delegate {
Console.WriteLine("A static and constant string");
},
debug = delegate {
Debug.WriteLine("A static and constant string");
}
},
new Trial {
name = "dynamic",
console = delegate {
Console.WriteLine("A dynamically built string (number " + dummy++ + ")");
},
debug = delegate {
Debug.WriteLine("A dynamically built string (number " + dummy++ + ")");
}
},
new Trial {
name = "interpolated",
console = delegate {
Console.WriteLine($ "An interpolated string (number {dummy++,6})");
},
debug = delegate {
Debug.WriteLine($ "An interpolated string (number {dummy++,6})");
}
}
};
}
}
Just a little trick I use sometimes: if you remove focus from the console window by opening another window over it, and leave it that way until the run completes, the console won't redraw until you refocus, speeding it up significantly. Just make sure you have the buffer set high enough that you can scroll back through all of the output.
Try using the System.Diagnostics Debug class. You can accomplish the same things as with Console.WriteLine.
You can view the available class methods here.