I'm running into an issue where I get significant (10+ second) delays when performing file write operations. It seems to happen only once, and always during the 2nd (or sometimes 3rd?) call to the WriteToFile() function.
I've written out 3 different WriteToFile variants to show some of the approaches I've tried so far, and included the additional lines in OpenFileIfNecessary that I've also tried.
The code never throws an error, and the offsets/counts are all valid. Once the delays occur a single time, there seem to be no further delays.
This has been a pain in my side for 2+ days and I'm definitely at that point where I'm in need of a 2nd opinion.
private void WriteToFile(byte[] data, long offset, int count)
{
lock (this.monitor)
{
this.OpenFileIfNecessary();
this.fileStream.Seek(offset, SeekOrigin.Begin); // <- Takes 10+ seconds for THIS line to execute
this.fileStream.Write(data, 0, count);
}
}
private void WriteToFile2(byte[] data, long offset, int count)
{
lock (this.monitor)
{
this.OpenFileIfNecessary();
this.fileStream.Position = offset; // <- Takes 10+ seconds for THIS line to execute
this.fileStream.Write(data, 0, count);
}
}
private void WriteToFile3(byte[] data, long offset, int count)
{
lock (this.monitor)
{
var fileName = this.file.FullName;
using (Stream fileStream = new FileStream(fileName, FileMode.OpenOrCreate))
{
fileStream.Position = offset; //(instant execution of this line)
fileStream.Write(data, 0, count);
//Getting from HERE ->
}
//To HERE <- takes 10+ seconds
}
}
private System.IO.FileStream fileStream = null;
private System.IO.FileInfo file; //value set during construction
private void OpenFileIfNecessary()
{
lock (this.monitor) {
if (this.fileStream == null) {
//The following 3 lines all result in the same behavior described in this post
//this.fileStream = this.file.Open(FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite);
//this.fileStream = this.file.Open(FileMode.OpenOrCreate, FileAccess.Write, FileShare.Write);
//this.fileStream = this.file.OpenWrite();
this.fileStream = this.file.Open(FileMode.OpenOrCreate);
}
}
}
Found the issue. It's worth mentioning that we had previously been testing with smaller (<1 GB) files until late last week. With that in mind:
We write to the file at different positions; that is, we don't simply start at position 0 and go to the end. What that means (especially for larger files) is that the first time we seek to a position deep into the file, there is apparently a wait while the newly extended region is allocated.
The way FileStream obfuscates a lot of the under-the-hood stuff made it a little difficult to find the pattern, and once we did some deeper profiling and discovered smaller delays with smaller files (we had never noticed those delays before), it became clear what was happening.
The plan going forward is to use some multithreading so the space for the file can be fully allocated before writing to disk; we can buffer in memory during that wait period.
Example code for preallocating the entire file:
fileStream.Seek(size - 1, SeekOrigin.Begin);
fileStream.WriteByte(0);
fileStream.Flush();
That is happening because when you set the file position to some large value, the underlying storage system has to zero out the contents of the newly allocated blocks. I do not believe the BCL will let you bypass that, but there is actually a way in Win32 to skip that zero-fill; roughly speaking, it requires the running program to have administrator privileges.
Search for the SetFileValidData() documentation.
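For reference, here is a minimal P/Invoke sketch of that approach; it is an illustration under the assumption that the process holds the SeManageVolumePrivilege (normally administrator), not production code:
using System;
using System.ComponentModel;
using System.IO;
using System.Runtime.InteropServices;

static class Preallocator
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetFileValidData(IntPtr hFile, long validDataLength);

    public static void PreallocateWithoutZeroFill(FileStream fs, long size)
    {
        fs.SetLength(size); // extend the file; the OS would normally zero-fill on first access
        // Skip the zero-fill. WARNING: the region now exposes whatever bytes were
        // previously on disk, so only do this if you will overwrite it entirely.
        if (!SetFileValidData(fs.SafeFileHandle.DangerousGetHandle(), size))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}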
I've recently implemented a small program which reads data coming from a sensor and plots it as a diagram.
The data comes in as chunks of 5 bytes, roughly every 500 µs (baud rate: 500000). Around 3000 chunks make up a complete line, so the total transmission time is around 1.5 s.
As I was looking at the live diagram I noticed a severe lag between what is shown and what is currently measured. Investigating, it all boiled down to:
SerialPort.ReadLine();
It takes around 0.5 s longer than the line takes to be transmitted, so each line read takes around 2 s. Interestingly, no data is lost; it just lags behind even more with each new line read. This is very irritating for the user, so I couldn't leave it like that.
I've implemented my own variant and it shows a consistent time of around 1.5 s, with no lag. I'm not really proud of my implementation (it more or less polls the BaseStream), and I'm wondering if there is a way to speed up the ReadLine function of the SerialPort class. With my implementation I'm also getting some corrupted lines, and I haven't found the exact cause yet.
I've tried changing the ReadTimeout to 1600, but that just produced a TimeoutException, even though the data arrived.
Any explanation as to why it is slow, or a way to fix it, is appreciated.
As a side note: I've tried this in a console application with only SerialPort.ReadLine() as well, and the result is the same, so I'm ruling out my own application affecting the SerialPort.
I'm not sure this is relevant, but my implementation looks like this:
LineSplitter lineSplitter = new LineSplitter();
async Task<string> SerialReadLineAsync(SerialPort serialPort)
{
byte[] buffer = new byte[5];
string ret = string.Empty;
while (true)
{
try
{
int bytesRead = await serialPort.BaseStream.ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false);
byte[] line = lineSplitter.OnIncomingBinaryBlock(this, buffer, bytesRead);
if (null != line)
{
return Encoding.ASCII.GetString(line).TrimEnd('\r', '\n');
}
}
catch
{
return string.Empty;
}
}
}
With LineSplitter being the following:
class LineSplitter
{
// based on: http://www.sparxeng.com/blog/software/reading-lines-serial-port
public byte Delimiter = (byte)'\n';
byte[] leftover;
public byte[] OnIncomingBinaryBlock(object sender, byte[] buffer, int bytesInBuffer)
{
leftover = ConcatArray(leftover, buffer, 0, bytesInBuffer);
int newLineIndex = Array.IndexOf(leftover, Delimiter);
if (newLineIndex >= 0)
{
byte[] result = new byte[newLineIndex+1];
Array.Copy(leftover, result, result.Length);
byte[] newLeftover = new byte[leftover.Length - result.Length];
Array.Copy(leftover, newLineIndex + 1, newLeftover, 0, newLeftover.Length);
leftover = newLeftover;
return result;
}
return null;
}
static byte[] ConcatArray(byte[] head, byte[] tail, int tailOffset, int tailCount)
{
byte[] result;
if (head == null)
{
result = new byte[tailCount];
Array.Copy(tail, tailOffset, result, 0, tailCount);
}
else
{
result = new byte[head.Length + tailCount];
head.CopyTo(result, 0);
Array.Copy(tail, tailOffset, result, head.Length, tailCount);
}
return result;
}
}
I ran into this issue in 2008 talking to GPS modules. Essentially the blocking functions are flaky, and the solution is to use APM (the asynchronous programming model, i.e. BeginRead/EndRead).
Here are the gory details in another Stack Overflow answer: How to do robust SerialPort programming with .NET / C#?
You may also find this of interest: How to kill off a pending APM operation
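For illustration, a minimal sketch of the APM pattern on the port's BaseStream (the buffer size and the hand-off to a line splitter are assumptions, not code from the linked answers):
byte[] readBuffer = new byte[4096];

void BeginSerialRead(SerialPort port)
{
    port.BaseStream.BeginRead(readBuffer, 0, readBuffer.Length, ar =>
    {
        try
        {
            int bytesRead = port.BaseStream.EndRead(ar);
            // Hand the bytesRead bytes to something like the LineSplitter above...
            BeginSerialRead(port); // immediately queue the next read
        }
        catch (IOException)
        {
            // Port was closed; stop reading.
        }
    }, null);
}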
So, I just found a really weird issue in my app, and it turns out it was caused by the .NET Native compiler for some reason.
I have a method that compares the content of two files, and it works fine. With two 400 KB files, it takes around 0.4 seconds to run on my Lumia 930 in Debug mode. But in Release mode, it takes up to 17 seconds for no apparent reason. Here's the code:
// Compares the content of the two streams
private static async Task<bool> ContentEquals(ulong size, [NotNull] Stream fileStream, [NotNull] Stream testStream)
{
// Initialization
const int bytes = 8;
int iterations = (int)Math.Ceiling((double)size / bytes);
byte[] one = new byte[bytes];
byte[] two = new byte[bytes];
// Read all the bytes and compare them 8 at a time
for (int i = 0; i < iterations; i++)
{
await fileStream.ReadAsync(one, 0, bytes);
await testStream.ReadAsync(two, 0, bytes);
if (BitConverter.ToUInt64(one, 0) != BitConverter.ToUInt64(two, 0)) return false;
}
return true;
}
/// <summary>
/// Checks if the content of two files is the same
/// </summary>
/// <param name="file">The source file</param>
/// <param name="test">The file to test</param>
public static async Task<bool> ContentEquals([NotNull] this StorageFile file, [NotNull] StorageFile test)
{
// If the two files have a different size, just stop here
ulong size = await file.GetFileSizeAsync();
if (size != await test.GetFileSizeAsync()) return false;
// Open the two files to read them
try
{
// Direct streams
using (Stream fileStream = await file.OpenStreamForReadAsync())
using (Stream testStream = await test.OpenStreamForReadAsync())
{
return await ContentEquals(size, fileStream, testStream);
}
}
catch (UnauthorizedAccessException)
{
// Copy streams
StorageFile fileCopy = await file.CreateCopyAsync(ApplicationData.Current.TemporaryFolder);
StorageFile testCopy = await test.CreateCopyAsync(ApplicationData.Current.TemporaryFolder); // note: copy 'test', not 'file' twice
using (Stream fileStream = await fileCopy.OpenStreamForReadAsync())
using (Stream testStream = await testCopy.OpenStreamForReadAsync())
{
// Compare the files
bool result = await ContentEquals(size, fileStream, testStream);
// Delete the temp files at the end of the operation
Task.Run(() =>
{
fileCopy.DeleteAsync(StorageDeleteOption.PermanentDelete).Forget();
testCopy.DeleteAsync(StorageDeleteOption.PermanentDelete).Forget();
}).Forget();
return result;
}
}
}
Now, I have absolutely no idea why this same exact method goes from 0.4 seconds all the way up to more than 15 seconds when compiled with the .NET Native toolchain.
I fixed this issue by using a single ReadAsync call to read each entire file, then generating two MD5 hashes from the results and comparing them. This approach ran in around 0.4 seconds on my Lumia 930, even in Release mode.
Still, I'm curious about this issue and I'd like to know why it was happening.
Thank you in advance for your help!
EDIT: I've tweaked my method to reduce the number of actual I/O operations; this is the result, and it looks like it's working fine so far.
private static async Task<bool> ContentEquals(ulong size, [NotNull] Stream fileStream, [NotNull] Stream testStream)
{
// Initialization
const int bytes = 102400;
int iterations = (int)Math.Ceiling((double)size / bytes);
byte[] first = new byte[bytes], second = new byte[bytes];
// Read the files in 100 KB chunks and compare them 8 bytes at a time
for (int i = 0; i < iterations; i++)
{
// Read the next data chunk
int[] counts = await Task.WhenAll(fileStream.ReadAsync(first, 0, bytes), testStream.ReadAsync(second, 0, bytes));
if (counts[0] != counts[1]) return false;
int target = counts[0];
// Compare the first bytes 8 at a time
int j;
for (j = 0; j + 8 <= target; j += 8) // only full 8-byte words; the remainder is handled below
{
if (BitConverter.ToUInt64(first, j) != BitConverter.ToUInt64(second, j)) return false;
}
// Compare the bytes in the last chunk if necessary
while (j < target)
{
if (first[j] != second[j]) return false;
j++;
}
}
return true;
}
Reading eight bytes at a time from an I/O device is a performance disaster. That's why we are using buffered reading (and writing) in the first place. It takes time for an I/O request to be submitted, processed, executed and finally returned.
OpenStreamForReadAsync appears not to be using a buffered stream, so your 8-byte requests are actually issuing 8-byte I/O requests. Even with a solid-state drive, this is very slow.
You don't need to read the whole file at once, though. The usual approach is to find a reasonable buffer size to pre-read; something like reading 1 KiB at a time should fix the whole issue without requiring you to load the whole file into memory at once. You can put a BufferedStream between the file and your reads to handle this for you. And if you're feeling adventurous, you could issue the next read request before the CPU processing is done, though it's very likely that this isn't going to help your performance much, given how much of the work is just I/O.
It also seems that .NET Native has much bigger overhead than managed .NET for asynchronous I/O in the first place, which makes those tiny asynchronous calls all the more of a problem. Fewer requests for larger chunks of data will help.
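As a minimal sketch of the BufferedStream suggestion applied to the question's method (the 16 KiB buffer size is an assumption):
// Wrap each raw stream so the 8-byte reads are served from an in-memory
// buffer instead of issuing a separate I/O request every time.
using (Stream fileStream = new BufferedStream(await file.OpenStreamForReadAsync(), 16384))
using (Stream testStream = new BufferedStream(await test.OpenStreamForReadAsync(), 16384))
{
    return await ContentEquals(size, fileStream, testStream);
}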
I am writing a WPF application in C#, and I need to move some files; the rub is that I really, REALLY need to know whether the files make it. To do this, I wrote a check that makes sure the file gets to the target directory after the move. The problem is that sometimes I get to the check before the file finishes moving:
try
{
    System.IO.File.Move(file.FullName, endLocationWithFile);
    System.IO.FileInfo[] filesInDirectory = endLocation.GetFiles();
    foreach (System.IO.FileInfo temp in filesInDirectory)
    {
        if (temp.Name == shortFileName)
        {
            return true;
        }
    }
    // The file we sent over has not gotten to the correct directory... something went wrong!
    throw new IOException("File did not reach destination");
}
catch (Exception e)
{
    // Something went wrong, return a fail
    logger.writeErrorLog(e);
    return false;
}
Could somebody tell me how to make sure that the file actually gets to the destination? The files that I will be moving could be VERY large (full-HD MP4 files of up to 2 hours).
Thanks!
You could use streams with async/await to ensure the file is completely copied.
Something like this should work:
private void Button_Click(object sender, RoutedEventArgs e)
{
string sourceFile = @"\\HOMESERVER\Development Backup\Software\Microsoft\en_expression_studio_4_premium_x86_dvd_537029.iso";
string destinationFile = "G:\\en_expression_studio_4_premium_x86_dvd_537029.iso";
MoveFile(sourceFile, destinationFile);
}
private async void MoveFile(string sourceFile, string destinationFile)
{
try
{
using (FileStream sourceStream = File.Open(sourceFile, FileMode.Open))
{
using (FileStream destinationStream = File.Create(destinationFile))
{
await sourceStream.CopyToAsync(destinationStream);
if (MessageBox.Show("I made it in one piece :), would you like to delete me from the original file?", "Done", MessageBoxButton.YesNo) == MessageBoxResult.Yes)
{
sourceStream.Close();
File.Delete(sourceFile);
}
}
}
}
catch (IOException ioex)
{
MessageBox.Show("An IOException occured during move, " + ioex.Message);
}
catch (Exception ex)
{
MessageBox.Show("An Exception occured during move, " + ex.Message);
}
}
If you're using VS2010, you will have to install the Async CTP to use the new async/await syntax.
You could watch for the files to disappear from the original directory, and then confirm that they indeed appeared in the target directory.
I have not had great experience with file watchers. I would probably have the thread doing the move wait for an AutoResetEvent while a separate thread or timer runs to periodically check for the files to disappear from the original location, check that they are in the new location, and perhaps (depending on your environment and needs) perform a consistency check (e.g. MD5 check) of the files. Once those conditions are satisfied, the "checker" thread/timer would trigger the AutoResetEvent so that the original thread can progress.
Include some "this is taking way too long" logic in the "checker".
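A rough sketch of that checker pattern, assuming the paths, the polling interval, and the timeout are all placeholders to tune for your environment:
var moved = new AutoResetEvent(false);
// Checker: periodically test that the file has left the source and appeared at the target.
var checker = new System.Threading.Timer(_ =>
{
    if (!File.Exists(sourcePath) && File.Exists(targetPath))
        moved.Set(); // optionally verify an MD5 hash of the target first
}, null, dueTime: 500, period: 500);

File.Move(sourcePath, targetPath);

// "This is taking way too long" logic: give up after 30 minutes.
bool succeeded = moved.WaitOne(TimeSpan.FromMinutes(30));
checker.Dispose();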
Why not manage the copy yourself by copying streams?
//http://www.dotnetthoughts.net/writing_file_with_non_cache_mode_in_c/
const FileOptions FILE_FLAG_NO_BUFFERING = (FileOptions) 0x20000000;
//experiment with different buffer sizes for optimal speed
var bufLength = 4096;
using(var outFile =
new FileStream(
destPath,
FileMode.Create,
FileAccess.Write,
FileShare.None,
bufLength,
FileOptions.WriteThrough | FILE_FLAG_NO_BUFFERING))
using(var inFile = File.OpenRead(srcPath))
{
//either
//inFile.CopyTo(outFile);
//or
var fileSizeInBytes = inFile.Length;
var buf = new byte[bufLength];
long totalCopied = 0L;
int amtRead;
while((amtRead = inFile.Read(buf,0,bufLength)) > 0)
{
outFile.Write(buf,0,amtRead);
totalCopied += amtRead;
double progressPct = Convert.ToDouble(totalCopied) * 100d / fileSizeInBytes;
progressPct.Dump(); // Dump() is LINQPad-specific; use Console.WriteLine(progressPct) elsewhere
}
}
//file is written
You most likely want the move to happen in a separate thread so that you aren't stopping the execution of your application for hours.
If the program cannot continue without the move being completed, then you could open a dialog and check in on the move thread periodically to update a progress tracker. This provides the user with feedback and will prevent them from feeling as if the program has frozen.
There's info and an example on this here:
http://hintdesk.com/c-wpf-copy-files-with-progress-bar-by-copyfileex-api/
Try checking periodically in a background task whether the copied file's size has reached the size of the original file (you can also compare hashes of the two files); a sketch follows below.
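A minimal sketch of that idea (the paths and polling interval are illustrative, and it assumes a copy that grows the destination file in place):
long expectedSize = new FileInfo(sourcePath).Length;
await Task.Run(async () =>
{
    // Poll until the destination exists and has reached the full size.
    while (!File.Exists(destinationPath) ||
           new FileInfo(destinationPath).Length < expectedSize)
    {
        await Task.Delay(500);
    }
    // Optionally compare hashes of the two files here as well.
});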
I ran into a similar problem recently.
OnBackupStarts();
//.. do stuff
new TaskFactory().StartNew(() =>
{
OnBackupStarts();
//.. do stuff
OnBackupEnds();
});
void OnBackupEnds()
{
if (BackupChanged != null)
{
BackupChanged(this, new BackupChangedEventArgs(BackupState.Done));
}
}
Do not wait; react to the event.
First, consider that moving files within an operating system does not "recreate" the file in the new directory; it only changes the file's location data in the file allocation table, since physically copying all the bytes just to delete the old ones would be a waste of time.
For that reason, moving files is a very fast process, no matter the file size.
EDIT: As Mike Christiansen states in his comment, this "speedy" process only happens when the files are moved within the same volume (you know, C:\... to C:\...).
Thus, the copy/delete behavior proposed by "sa_ddam213" in his response will work, but it is not the optimal solution (it takes longer to finish, and it will not work if, for example, you don't have enough free disk space to hold the copy while the old file still exists, ...).
The MSDN documentation for the File.Move(source, destination) method does not specify whether it waits for completion, but the code given as an example makes a simple File.Exists(...) check, saying that finding the original file still there "is unexpected":
// Move the file.
File.Move(path, path2);
Console.WriteLine("{0} was moved to {1}.", path, path2);
// See if the original exists now.
if (File.Exists(path))
{
Console.WriteLine("The original file still exists, which is unexpected.");
}
else
{
Console.WriteLine("The original file no longer exists, which is expected.");
}
Perhaps you could use a similar approach, checking in a while loop for the existence of the new file and the non-existence of the old one, with a timed exit from the loop in case something unexpected happens at the operating-system level and the files get lost:
// We perform the movement of the file
File.Move(source,destination);
// Set an "exit" datetime after which the loop will end, for example 15 seconds from now. A move within the same volume should be almost immediate, but a move across volumes can take longer
DateTime exitDateTime = DateTime.Now.AddSeconds(15);
bool exitLoopByExpiration = false;
// We wait here until the move is finished (checked via file existence) or the time limit expires
while (File.Exists(source) && !File.Exists(destination) && !exitLoopByExpiration) {
    // Compare the current time with the exit time; if we have passed it, set the flag to exit the loop by expiration rather than by the move completing
    if (DateTime.Now.CompareTo(exitDateTime) > 0) { exitLoopByExpiration = true; }
    Thread.Sleep(50); // yield briefly so the poll doesn't spin a CPU core
}
if (exitLoopByExpiration) {
    // We can perform extra work here, like logging the problem or throwing an exception, because the loop exited due to time expiration
}
I have checked this solution and it seems to work without problems.
This is a continuation of part 3:
Write file need to optimised for heavy traffic part 3
As my code has changed somewhat, I think it is better to open a new thread.
public class memoryStreamClass
{
static MemoryStream ms1 = new MemoryStream();
static MemoryStream ms2 = new MemoryStream();
static int c = 1;
public void fillBuffer(string outputString)
{
byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
if (c == 1)
{
ms1.Write(outputByte, 0, outputByte.Length);
if (ms1.Length > 8100)
{
c = 2;
Thread thread1 = new Thread(() => emptyBuffer(ref ms1));
thread1.Start();
}
}
else
{
ms2.Write(outputByte, 0, outputByte.Length);
if (ms2.Length > 8100)
{
c = 1;
Thread thread2 = new Thread(() => emptyBuffer(ref ms2));
thread2.Start();
}
}
}
void emptyBuffer(ref MemoryStream ms)
{
FileStream outStream = new FileStream(@"c:\output.txt", FileMode.Append);
ms.WriteTo(outStream);
outStream.Flush();
outStream.Close();
ms.SetLength(0);
ms.Position = 0;
Console.WriteLine(ms.Position);
}
}
There are 2 things I have changed from the code in part 3.
The class and method are now non-static; the variables are still static, though.
I have moved the MemoryStream length reset into the emptyBuffer method, and I use a ref parameter to pass the reference instead of a copy to the method.
This code compiles fine and runs OK. However, I ran it side by side with my single-threaded program, using 2 computers on the same network: one computer ran the single-threaded version and one ran the multithreaded version. I ran them for around 5 minutes; the single-threaded version collected 8333 KB of data while the multithreaded version collected only 8222 KB (98.6% of the single-threaded version).
It's the first time I have done any performance comparison between the 2 versions. Maybe I should run more tests to confirm it, but just from looking at the code, can any masters out there point out any problems?
I haven't put in any locking or thread pooling at the moment; maybe I should, but if the code runs fine I don't want to change it and break it. The only thing I will change is the buffer size, to eliminate any chance of one buffer filling up before the other is emptied.
Any comments on my code?
The problem is still static state. You're clearing buffers that could contain data that hasn't been written to disk yet.
I imagine this scenario is happening about 1.4% of the time:
1. ms1 fills up; the empty-ms1 thread starts; switch to ms2
2. The empty-ms1 thread is writing to disk
3. ms2 fills up; the empty-ms2 thread starts; switch back to ms1
4. The empty-ms1 write to disk finishes
5. ms1 is cleared while it is the active stream
When doing multi-threaded programming, static classes are fine but static state is not. Ideally you have no shared mutable memory between threads at all.
Think of it this way: if you're expecting a value to change constantly, it's not exactly static, is it?
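As a hedged sketch of one way to remove that race: swap the full buffer out under a lock and hand the detached instance to the writer thread, so no thread ever clears a stream another thread is still flushing (the 8100-byte threshold and output path come from the question; the rest is illustrative):
using System.IO;
using System.Text;
using System.Threading;

public class MemoryStreamBuffer
{
    readonly object swapLock = new object();
    static readonly object fileLock = new object();
    MemoryStream current = new MemoryStream();

    public void FillBuffer(string outputString)
    {
        byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
        MemoryStream full = null;
        lock (swapLock)
        {
            current.Write(outputByte, 0, outputByte.Length);
            if (current.Length > 8100)
            {
                full = current;               // detach the full buffer...
                current = new MemoryStream(); // ...and keep filling a fresh one
            }
        }
        if (full != null)
            new Thread(() => EmptyBuffer(full)).Start(); // the writer owns 'full' exclusively
    }

    static void EmptyBuffer(MemoryStream ms)
    {
        lock (fileLock) // serialize appends between overlapping writer threads
        {
            using (var outStream = new FileStream(@"c:\output.txt", FileMode.Append))
                ms.WriteTo(outStream);
        }
        ms.Dispose();
    }
}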
I run through millions of records and sometimes I have to debug using Console.WriteLine to see what is going on.
However, Console.WriteLine is very slow, considerably slower than writing to a file.
But it is very convenient; does anyone know of a way to speed it up?
If it is just for debugging purposes you should use Debug.WriteLine instead. This will most likely be a bit faster than using Console.WriteLine.
Example
Debug.WriteLine("There was an error processing the data.");
You can use the OutputDebugString API function to send a string to the debugger. It doesn't wait for anything to redraw and this is probably the fastest thing you can get without digging into the low-level stuff too much.
The text you give to this function will go into Visual Studio Output window.
[DllImport("kernel32.dll")]
static extern void OutputDebugString(string lpOutputString);
Then you just call OutputDebugString("Hello world!");
Do something like this:
public static class QueuedConsole
{
private static StringBuilder _sb = new StringBuilder();
private static int _lineCount;
public static void WriteLine(string message)
{
_sb.AppendLine(message);
++_lineCount;
if (_lineCount >= 10)
WriteAll();
}
public static void WriteAll()
{
Console.WriteLine(_sb.ToString());
_lineCount = 0;
_sb.Clear();
}
}
QueuedConsole.WriteLine("This message will not be written directly, but with nine other entries to increase performance.");
//after your operations, end with write all to get the last lines.
QueuedConsole.WriteAll();
Here is another example: Does Console.WriteLine block?
I recently did a benchmark battery for this on .NET 4.8. The tests included many of the proposals mentioned on this page, including Async and blocking variants of both BCL and custom code, and then most of those both with and without dedicated threading, and finally scaled across power-of-2 buffer sizes.
The fastest method, now used in my own projects, buffers 64K of wide (Unicode) characters at a time from .NET directly to the Win32 function WriteConsoleW without copying or even hard-pinning. Remainders larger than 64K, after filling and flushing one buffer, are also sent directly, and in-situ as well. The approach deliberately bypasses the Stream/TextWriter paradigm so it can (obviously enough) provide .NET text that is already Unicode to a (native) Unicode API without all the superfluous memory copying/shuffling and byte[] array allocations required for first "decoding" to a byte stream.
If there is interest (perhaps because the buffering logic is slightly intricate), I can provide the source for the above; it's only about 80 lines. However, my tests determined that there's a simpler way to get nearly the same performance, and since it doesn't require any Win32 calls, I'll show this latter technique instead.
The following is way faster than Console.Write:
public static class FastConsole
{
static readonly BufferedStream str;
static FastConsole()
{
Console.OutputEncoding = Encoding.Unicode; // crucial
// avoid special "ShadowBuffer" for hard-coded size 0x14000 in 'BufferedStream'
str = new BufferedStream(Console.OpenStandardOutput(), 0x15000);
}
public static void WriteLine(String s) => Write(s + "\r\n");
public static void Write(String s)
{
// avoid endless 'GetByteCount' dithering in 'Encoding.Unicode.GetBytes(s)'
var rgb = new byte[s.Length << 1];
Encoding.Unicode.GetBytes(s, 0, s.Length, rgb, 0);
lock (str) // (optional, can omit if appropriate)
str.Write(rgb, 0, rgb.Length);
}
public static void Flush() { lock (str) str.Flush(); }
};
Note that this is a buffered writer, so you must call Flush() when you have no more text to write.
I should also mention that, as shown, technically this code assumes 16-bit Unicode (UCS-2, as opposed to UTF-16) and thus won't properly handle 4-byte escape surrogates for characters beyond the Basic Multilingual Plane. The point hardly seems important given the more extreme limitations on console text display in general, but could perhaps still matter for piping/redirection.
Usage:
FastConsole.WriteLine("hello world.");
// etc...
FastConsole.Flush();
On my machine, this gets about 77,000 lines/second (mixed-length) versus only 5,200 lines/sec under identical conditions for normal Console.WriteLine. That's a factor of almost 15x speedup.
These are controlled comparison results only; note that absolute measurements of console output performance are highly variable, depending on the console window settings and runtime conditions, including size, layout, fonts, DWM clipping, etc.
Why Console is slow:
Console output is actually an IO stream that's managed by your operating system. Most IO classes (like FileStream) have async methods but the Console class was never updated so it always blocks the thread when writing.
Console.WriteLine is backed by SyncTextWriter which uses a global lock to prevent multiple threads from writing partial lines. This is a major bottleneck that forces all threads to wait for each other to finish the write.
If the console window is visible on screen then there can be significant slowdown because the window needs to be redrawn before the console output is considered flushed.
Solutions:
Wrap the Console stream with a StreamWriter and then use async methods:
var sw = new StreamWriter(Console.OpenStandardOutput());
await sw.WriteLineAsync("...");
You can also set a larger buffer if you need to use sync methods. The call will occasionally block when the buffer gets full and is flushed to the stream.
// set a buffer size
var sw = new StreamWriter(Console.OpenStandardOutput(), Encoding.UTF8, 8192);
// this write call will block when buffer is full
sw.Write("...");
If you want the fastest writes, though, you'll need to make your own buffer class that writes to memory and flushes to the console asynchronously in the background using a single thread, without locking. The new Channel<T> class in .NET Core 2.1 makes this simple and fast; a sketch follows below. Plenty of other questions show that code, but comment if you need tips.
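A rough sketch of that idea, assuming System.Threading.Channels is available (.NET Core 2.1+):
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public static class ChannelConsole
{
    static readonly Channel<string> channel =
        Channel.CreateUnbounded<string>(new UnboundedChannelOptions { SingleReader = true });

    static ChannelConsole()
    {
        // One background drainer; producers post lines without blocking on console I/O.
        Task.Run(async () =>
        {
            while (await channel.Reader.WaitToReadAsync())
                while (channel.Reader.TryRead(out string line))
                    Console.WriteLine(line);
        });
    }

    public static void WriteLine(string line) => channel.Writer.TryWrite(line);
}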
A little old thread, and maybe not exactly what the OP is looking for, but I ran into the same question recently when processing audio data in real time.
I compared Console.WriteLine to Debug.WriteLine with the code below and used DebugView as a DOS-box alternative. It's a single executable (nothing to install) and can be customized in very neat ways (filters & colors!). It has no problems with tens of thousands of lines and manages memory quite well (I could not find any kind of leak, even after days of logging).
After doing some testing in different environments (e.g.: virtual machine, IDE, background processes running, etc) I made the following observations:
Debug is almost always faster
For small bursts of lines (<1000), it's about 10 times faster
For larger chunks it seems to converge to about 3x
If the Debug output goes to the IDE, Console is faster :-)
If DebugView is not running, Debug gets even faster
For really large amounts of consecutive output (>10000 lines), Debug gets slower and Console stays constant. I presume this is due to the memory that Debug has to allocate and Console does not.
Obviously, it makes a difference whether DebugView is actually "in view" or not, as the many GUI updates have a significant impact on the overall performance of the system, while Console simply blocks, visible or not. But it's hard to put numbers on that one...
I did not try multiple threads writing to the Console, as I think this should generally be avoided. I never had (performance) problems when writing to Debug from multiple threads.
If you compile with Release settings, usually all Debug statements are omitted and Trace should produce the same behaviour as Debug.
I used VS2017 & .Net 4.6.1
Sorry for so much code, but I had to tweak it quite a lot to actually measure what I wanted. If you can spot any problems with the code (biases, etc.), please comment. I would love to get more precise data for real-life systems.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;
namespace Console_vs_Debug {
class Program {
class Trial {
public string name;
public Action console;
public Action debug;
public List<float> consoleMeasuredTimes = new List<float>();
public List<float> debugMeasuredTimes = new List<float>();
}
static Stopwatch sw = new Stopwatch();
private static int repeatLoop = 1000;
private static int iterations = 2;
private static int dummy = 0;
static void Main(string[] args) {
if (args.Length == 2) {
repeatLoop = int.Parse(args[0]);
iterations = int.Parse(args[1]);
}
// do some dummy work
for (int i = 0; i < 100; i++) {
Console.WriteLine("-");
Debug.WriteLine("-");
}
for (int i = 0; i < iterations; i++) {
foreach(Trial trial in trials) {
Thread.Sleep(50);
sw.Restart();
for (int r = 0; r < repeatLoop; r++)
trial.console();
sw.Stop();
trial.consoleMeasuredTimes.Add(sw.ElapsedMilliseconds);
Thread.Sleep(1);
sw.Restart();
for (int r = 0; r < repeatLoop; r++)
trial.debug();
sw.Stop();
trial.debugMeasuredTimes.Add(sw.ElapsedMilliseconds);
}
}
Console.WriteLine("---\r\n");
foreach(Trial trial in trials) {
var consoleAverage = trial.consoleMeasuredTimes.Average();
var debugAverage = trial.debugMeasuredTimes.Average();
Console.WriteLine(trial.name);
Console.WriteLine($ " console: {consoleAverage,11:F4}");
Console.WriteLine($ " debug: {debugAverage,11:F4}");
Console.WriteLine($ "{consoleAverage / debugAverage,32:F2} (console/debug)");
Console.WriteLine();
}
Console.WriteLine("all measurements are in milliseconds");
Console.WriteLine("anykey");
Console.ReadKey();
}
private static List<Trial> trials = new List<Trial> {
new Trial {
name = "constant",
console = delegate {
Console.WriteLine("A static and constant string");
},
debug = delegate {
Debug.WriteLine("A static and constant string");
}
},
new Trial {
name = "dynamic",
console = delegate {
Console.WriteLine("A dynamically built string (number " + dummy++ + ")");
},
debug = delegate {
Debug.WriteLine("A dynamically built string (number " + dummy++ + ")");
}
},
new Trial {
name = "interpolated",
console = delegate {
Console.WriteLine($ "An interpolated string (number {dummy++,6})");
},
debug = delegate {
Debug.WriteLine($ "An interpolated string (number {dummy++,6})");
}
}
};
}
}
Just a little trick I use sometimes: If you remove focus from the Console window by opening another window over it, and leave it until it completes, it won't redraw the window until you refocus, speeding it up significantly. Just make sure you have the buffer set up high enough that you can scroll back through all of the output.
Try using the System.Diagnostics Debug class. You can accomplish the same things as with Console.WriteLine.
You can view the available class methods here.