Multithreading Unhandled Exceptions - C#

I am working on a project that will kick off multiple independent processes. I would like them to be isolated to the point that if one fails unexpectedly, the others will continue on without being impacted. I have tried a POC (pasted below) to test this using AppDomains but it still crashes the entire parent application.
Am I taking the wrong approach? If so, what should I be doing? If not, what am I doing wrong?
class Program
{
    static void Main(string[] args)
    {
        Random rand = new Random();
        Thread[] threads = new Thread[15];

        for (int i = 0; i < 15; i++)
        {
            AppDomain domain = AppDomain.CreateDomain("Test" + i);
            domain.UnhandledException += new UnhandledExceptionEventHandler(domain_UnhandledException);

            Test test = domain.CreateInstanceFromAndUnwrap(Assembly.GetExecutingAssembly().Location, "ConsoleApplication1.Test") as Test;

            Thread thread = new Thread(new ParameterizedThreadStart(test.DoSomeStuff));
            thread.Start(rand.Next());
            Console.WriteLine(String.Format("Thread #{0} has started", i));
            threads[i] = thread;
        }

        for (int i = 0; i < 15; i++)
        {
            threads[i].Join();
            Console.WriteLine(String.Format("Thread #{0} has finished", i));
        }

        Console.ReadLine();
    }

    static void domain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
    {
        Console.WriteLine("UNHANDLED");
    }
}

public class Test : MarshalByRefObject
{
    public void DoSomeStuff(object state)
    {
        int loops = (int)state;
        for (int i = 0; i < loops; i++)
        {
            if (i % 300 == 0)
            {
                // WILL break
                Console.WriteLine("Breaking");
                int val = i / (i % 300);
            }
        }
    }
}
EDIT
Please note that the "Test" class is extremely simplified. The actual implementation would be extremely complex and would very likely have gaps in its exception handling.

You don't need separate AppDomains. All you need to do is catch exceptions in the DoSomeStuff member of the Test class. That way, if one of these threads handles its own exception, the rest of your app can continue running.

You have to catch the exception in the thread that raised it; there's no way around that. What you need to do next is probably serialize the exception back to the primary appdomain so it is aware of it. After all, it set off the thread to get some kind of job done and that job wasn't completed. Something ought to be done about that.
What you are emulating is the way that SQL Server and ASP.NET work. They have a very nice execution model. They accept requests to perform work from client machines. If that request bombs, they have the luxury of sending a "sorry, it didn't work" message back. And shrug it off like it never happened, so nicely supported by appdomains.
Leaving it up to the client machine to deal with that. Not infrequently requiring the assistance of a human, btw. Easy peasy, but it wasn't an accident they were designed that way. Truly emulating this execution model also requires finding somebody else to deal with the misery. That's difficult.
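A minimal sketch of that idea (the LastError property is illustrative, not from the question): catch inside the worker, stash the exception on the MarshalByRefObject, and let the primary appdomain inspect it after Join.
public class Test : MarshalByRefObject
{
    // Most exception types are serializable, so this can be read across the appdomain boundary.
    public Exception LastError { get; private set; }

    public void DoSomeStuff(object state)
    {
        try
        {
            // ... the real (possibly failing) work goes here ...
        }
        catch (Exception ex)
        {
            LastError = ex; // only this worker dies; the caller decides what to do next
        }
    }
}

// In Main, after threads[i].Join():
// if (test.LastError != null)
//     Console.WriteLine("Worker {0} failed: {1}", i, test.LastError.Message);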


ThreadPool race-condition, closure, locking or something else?

Still pretty new to threads so I'm sure it is one of those little gotchas and a repeat question, but I have been unable to find the answer browsing the threads.
I have a port scanner app in C#.
I'm using threadpools to spin up a new TcpClient for each port and probe if it's open.
After suffering through the concepts of closures and thread synchronization, I am having an issue when multiple threads try to save their results to different indexes in Orchestrator.hosts (a List).
I have multiple threads trying to update a single List results object. My understanding is this is fine as long as I lock the object on write; however, I'm finding that on some updates, multiple entries are getting the same update.
E.g., Thread #1 is supposed to update Hosts[0].Ports[0].Status to "Open".
What actually happens: Thread #1 updates multiple hosts with the port result, despite a specific Hosts index being passed:
Hosts[0].Ports[0].Status to "Open",
Hosts[1].Ports[0].Status to "Open",
Hosts[2].Ports[0].Status to "Open",
I'm not sure where my problem is. Here is the static method I'm calling to perform a probe of a given port:
public static void ScanTCPPorts()
{
    // Create a list of portsToScan objects to send to thread workers
    //List<ScanPortRequest> portsToScan = new List<ScanPortRequest>();
    using (ManualResetEvent resetEvent = new ManualResetEvent(false))
    {
        int toProcess = 0;
        for (var i = 0; i < hostCount; i++) // Starting at the beginning
        {
            // To hold our current host's ID (assigned outside the threaded function to avoid a race condition)
            int currentHostId = i;
            if (hosts[i].IsAlive || scanDefinition.isForced())
            {
                int portCount = hosts[i].Ports.Count;
                for (int p = 0; p < portCount; p++)
                {
                    // Thread-safe increment of our work-queue counter
                    Interlocked.Increment(ref toProcess);
                    int currentPortPosition = p;
                    // We need to send the array index in to the thread function
                    PortScanRequestResponse portRequestResponse = new PortScanRequestResponse(hosts[currentHostId], currentHostId, hosts[currentHostId].Ports[currentPortPosition], currentPortPosition);
                    ThreadPool.QueueUserWorkItem(
                        new WaitCallback(threadedRequestResponseInstance =>
                        {
                            PortScanRequestResponse portToScan = threadedRequestResponseInstance as PortScanRequestResponse;
                            PortScanRequestResponse threadResult = PortScanner.scanTCPPort(portToScan);
                            // Lock so the result update is thread-safe
                            lock (Orchestrator.hosts[portToScan.hostResultIndex])
                            {
                                if (threadResult.port.status == PortStatus.Open)
                                {
                                    // Update result
                                    Orchestrator.hosts[portToScan.hostResultIndex].Ports[portToScan.portResultIndex].status = PortStatus.Open;
                                    //Logger.Log(hosts[currentHostId].IPAddress + " " + hosts[currentHostId].Ports[currentPortPosition].type + " " + hosts[currentHostId].Ports[currentPortPosition].portNumber + " is open");
                                }
                                else
                                {
                                    Orchestrator.hosts[portToScan.hostResultIndex].Ports[portToScan.portResultIndex].status = PortStatus.Closed;
                                }
                                // Check if this was the last scan for the given host
                                if (Orchestrator.hosts[portToScan.hostResultIndex].PortScanComplete != true)
                                {
                                    if (Orchestrator.hosts[portToScan.hostResultIndex].isCompleted())
                                    {
                                        Orchestrator.hosts[portToScan.hostResultIndex].PortScanComplete = true;
                                        // Logger.Log(hosts[currentHostId].IPAddress + " has completed a port scan");
                                        Orchestrator.hosts[portToScan.hostResultIndex].PrintPortSummery();
                                    }
                                }
                            }
                            // Safely decrement the counter
                            if (Interlocked.Decrement(ref toProcess) == 0)
                                resetEvent.Set();
                        }), portRequestResponse); // Pass in our port to scan
                }
            }
        }
        resetEvent.WaitOne();
    }
}
Here is the worker process in a separate public static class.
public static PortScanRequestResponse scanTCPPort(object portScanRequest)
{
    PortScanRequestResponse portScanResponse = portScanRequest as PortScanRequestResponse;
    HostDefinition host = portScanResponse.host;
    ScanPort port = portScanResponse.port;
    try
    {
        using (TcpClient threadedClient = new TcpClient())
        {
            try
            {
                IAsyncResult result = threadedClient.BeginConnect(host.IPAddress, port.portNumber, null, null);
                Boolean success = result.AsyncWaitHandle.WaitOne(Orchestrator.scanDefinition.GetPortTimeout(), false);
                if (threadedClient.Client != null)
                {
                    if (success)
                    {
                        threadedClient.EndConnect(result);
                        threadedClient.Close();
                        portScanResponse.port.status = PortStatus.Open;
                        return portScanResponse;
                    }
                }
            }
            catch { }
        }
    }
    catch { }
    portScanResponse.port.status = PortStatus.Closed;
    return portScanResponse;
}
Originally I was pulling the host index from a free variable; thinking this was the problem, I moved it inside the delegate.
I tried locking the Hosts object everywhere there was a write.
I have tried different thread sync techniques (CountdownEvent and ManualResetEvent).
I think there is just some fundamental threading principle I have not been introduced to yet, or I have made a very simple logic mistake.
I have multiple threads trying to update a single List results object. My understanding is this is fine as long as I lock the object on write.
I haven't studied your code, but the above statement alone is incorrect. When a List<T>, or any other non-thread-safe object, is used in a multithreaded environment, all interactions with the object must be synchronized. Only one thread at a time should be allowed to interact with the object. Both writes and reads must be enclosed in lock statements, using the same locker object. Even reading the Count must be synchronized. Otherwise the usage is erroneous, and the behavior of the program is undefined.
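To illustrate that rule (a minimal, self-contained sketch, not the scanner code): every interaction with the shared list, including reading Count, goes through the same lock object.
using System;
using System.Collections.Generic;
using System.Threading;

class SharedListDemo
{
    private static readonly object _gate = new object();
    private static readonly List<string> _results = new List<string>();

    static void Main()
    {
        Thread writer = new Thread(() =>
        {
            for (int i = 0; i < 1000; i++)
            {
                lock (_gate) { _results.Add("result " + i); }   // every write is locked
            }
        });
        writer.Start();

        for (int i = 0; i < 1000; i++)
        {
            int count;
            lock (_gate) { count = _results.Count; }            // even reading Count is locked
            Console.WriteLine("seen so far: " + count);
        }

        writer.Join();
    }
}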
I was hyper-focused on it being a threading issue because this was my first threaded project. It turned out that I didn't realize copies of List<> objects are references to the original object (reference type). I assumed my threads were accessing my save structure in an unpredictable way, but my arrays of ports were all referencing the same object.
This was a "reference type" vs "value type" issue on my List<> of ports.
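A minimal sketch of that pitfall (hypothetical names, not the scanner code): assigning or reusing a List<T> copies the reference, not the contents, so several hosts can end up sharing one ports list.
using System;
using System.Collections.Generic;

class ReferenceDemo
{
    class Port { public string Status = "Unknown"; }

    static void Main()
    {
        var sharedPorts = new List<Port> { new Port() };

        // Both "hosts" get the SAME list object, not independent copies.
        var host0Ports = sharedPorts;
        var host1Ports = sharedPorts;

        host0Ports[0].Status = "Open";

        Console.WriteLine(host1Ports[0].Status); // prints "Open" - the update is visible through both references

        // To give each host its own ports, build a new list (and new Port objects) per host:
        var independentPorts = new List<Port> { new Port() };
    }
}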

What is the correct way to use USBHIDDRIVER for multiple writes?

I am writing an application that needs to write messages to a USB HID device and read responses. For this purpose, I'm using USBHIDDRIVER.dll (https://www.leitner-fischer.com/2007/08/03/hid-usb-driver-library/ )
Now it works fine when writing many of the message types - i.e. short ones.
However, there is one type of message where I have to write a .hex file containing about 70,000 lines. The protocol requires that each line needs to be written individually and sent in a packet containing other information (start, end byte, checksum)
However I'm encountering problems with this.
I've tried something like this:
private byte[] _responseBytes;
private ManualResetEvent _readComplete;

public byte[][] WriteMessage(byte[][] message)
{
    List<byte[]> devResponse = new List<byte[]>();
    _readComplete = new ManualResetEvent(false);
    for (int i = 0; i < message.Length; i++)
    {
        var usbHid = new USBInterface("myvid", "mypid");
        usbHid.Connect();
        usbHid.enableUsbBufferEvent(UsbHidReadEvent);
        if (!usbHid.write(message[i]))
        {
            throw new Exception("Write Failed");
        }
        usbHid.startRead();
        if (!_readComplete.WaitOne(10000))
        {
            usbHid.stopRead();
            throw new Exception("Timeout waiting for read");
        }
        usbHid.stopRead();
        _readComplete.Reset();
        devResponse.Add(_responseBytes.ToArray());
        usbHid = null;
    }
    return devResponse.ToArray();
}

private void UsbHidReadEvent(object sender, EventArgs e)
{
    if (_readComplete != null)
    {
        _readComplete.Set();
    }
    _responseBytes = (byte[])((ListWithEvent)sender)[0];
}
This appears to work. In Wireshark I can see the messages going back and forth. However, as you can see, it creates an instance of the USBInterface class every iteration. This seems very clunky, and I can see in Task Manager that it starts to eat up a lot of memory - the current run has it above 1GB and eventually it falls over with an OutOfMemoryException. It is also very slow. The current run is not complete after about 15 minutes, although I've seen another application do the same job in less than one minute.
However, if I move the creation and connection of the USBInterface out of the loop as in...
var usbHid = new USBInterface("myvid", "mypid");
usbHid.Connect();
usbHid.enableUsbBufferEvent(UsbHidReadEvent);
for (int i = 0; i < message.Length; i++)
{
    if (!usbHid.write(message[i]))
    {
        throw new Exception("Write Failed");
    }
    usbHid.startRead();
    if (!_readComplete.WaitOne(10000))
    {
        usbHid.stopRead();
        throw new Exception("Timeout waiting for read");
    }
    usbHid.stopRead();
    _readComplete.Reset();
    devResponse.Add(_responseBytes.ToArray());
}
usbHid = null;
... now what happens is it only allows me to do one write! I write the data, read the response and when it comes around the loop to write the second message, the application just hangs in the write() function and never returns. (Doesn't even time out)
What is the correct way to do this kind of thing?
(BTW I know it's adding a lot of data to that devResponse object but this is not the source of the issue - if I remove it, it still consumes an awful lot of memory)
UPDATE
I've found that if I don't enable reading, I can do multiple writes without having to create a new USBInterface object with each iteration. This is an improvement, but I'd still like to be able to read each response. (I can see the responses are still sent down in Wireshark.)

Interprocess communication on the same box - communication between 2 applications or processes

What is the best way to implement interprocess communication between applications that are on the same box -- both written in c#?
The manager application will be sending commands such as stop and start to the other applications. It will also be monitoring the applications and possibly asking for data.
All applications will be on the same machine running on windows 7 OS.
Is an IPC channel a good choice for this? Are named pipes a better choice? Or are sockets?
The applications that are being managed all have the same name. When they start up, they load in a dll that determines which algorithms are run.
Thanks
Boiling it down, use WCF with named pipes.
Here is a link that has some meat to it regarding exactly this question: What is the best choice for .NET inter-process communication?
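A minimal sketch of that recommendation (the contract, class, and pipe address names here are made up for illustration): a self-hosted WCF service over NetNamedPipeBinding that the manager calls to send commands.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IWorkerControl
{
    [OperationContract]
    void Stop();

    [OperationContract]
    string GetStatus();
}

public class WorkerControl : IWorkerControl
{
    public void Stop() { Console.WriteLine("Stop requested"); }
    public string GetStatus() { return "Running"; }
}

class WorkerHost
{
    static void Main()
    {
        // Managed application: host the control service on a named pipe.
        using (var host = new ServiceHost(typeof(WorkerControl), new Uri("net.pipe://localhost/worker1")))
        {
            host.AddServiceEndpoint(typeof(IWorkerControl), new NetNamedPipeBinding(), "control");
            host.Open();
            Console.WriteLine("Listening; press Enter to exit.");
            Console.ReadLine();
        }
    }
}

// Manager side:
// var factory = new ChannelFactory<IWorkerControl>(new NetNamedPipeBinding(),
//     new EndpointAddress("net.pipe://localhost/worker1/control"));
// IWorkerControl proxy = factory.CreateChannel();
// Console.WriteLine(proxy.GetStatus());
// proxy.Stop();
Since the managed applications all have the same executable name, each instance would need its own pipe address (the "worker1" part above) so the manager can address them individually.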
One way to do it, especially if you want to communicate between different processes spun off from the same application, is using a MemoryMappedFile. Here is the simplest example - place it into a console app, start two instances of it and, in quick succession, type w+Enter in one and r+Enter in the other. Watch. Note - I de-synchronized the read and write timing so that you can see that sometimes the data changes and other times it does not.
class Program
{
    private static MemoryMappedFile _file = MemoryMappedFile.CreateOrOpen("XXX_YYY", 1, MemoryMappedFileAccess.ReadWrite);
    private static MemoryMappedViewAccessor _view = _file.CreateViewAccessor();

    static void Main(string[] args)
    {
        askforinput:
        Console.WriteLine("R for read, W for write");
        string input = Console.ReadLine();
        if (string.Equals(input, "r", StringComparison.InvariantCultureIgnoreCase))
            StartReading();
        else if (string.Equals(input, "w", StringComparison.InvariantCultureIgnoreCase))
            StartWriting();
        else
            goto askforinput;

        _view.Dispose();
        _file.Dispose();
    }

    private static void StartReading()
    {
        bool currVal = false;
        for (int i = 0; i < 100; i++)
        {
            currVal = currVal != true;
            Console.WriteLine(_view.ReadBoolean(0));
            Thread.Sleep(221);
        }
    }

    private static void StartWriting()
    {
        bool currVal = false;
        for (int i = 0; i < 100; i++)
        {
            currVal = currVal != true;
            _view.Write(0, currVal);
            Console.WriteLine("Written: " + currVal.ToString());
            Thread.Sleep(500);
        }
    }
}
Again, you can build a very complex and robust communication layer using memory-mapped files. This is just a simple example.

How to optimize SOA requests in HPC

I want to use HPC to do some simulations, and I'm going to use SOA. I have the following code from some sample materials, which I modified (I added the first for loop). Currently I've stumbled on an optimization / poor-performance problem. This basic sample does nothing except call a service method that returns the value it receives as a parameter. However, my example is slow. I have 60 computers with 4-core processors and a 1 Gb network. The first phase of sending messages takes about 2 seconds, and then I have to wait another 7 seconds for the return values. All values arrive more or less at the same time. Another problem is that I cannot reuse the session object; that is why the first for is outside the using. I want to put it inside the using, but then I get a timeout, or a message that the BrokerClient has ended.
Can I reuse the BrokerClient or DurableSession object?
How can I speed up this whole process of message passing?
static void Main(string[] args)
{
    const string headnode = "Head-Node.hpcCluster.edu.edu";
    const string serviceName = "EchoService";
    const int numRequests = 1000;
    SessionStartInfo info = new SessionStartInfo(headnode, serviceName);

    for (int j = 0; j < 100; j++)
    {
        using (DurableSession session = DurableSession.CreateSession(info))
        {
            Console.WriteLine("done session id = {0}", session.Id);
            NetTcpBinding binding = new NetTcpBinding(SecurityMode.Transport);
            using (BrokerClient<IService1> client = new BrokerClient<IService1>(session, binding))
            {
                for (int i = 0; i < numRequests; i++)
                {
                    EchoRequest request = new EchoRequest("hello world!");
                    client.SendRequest<EchoRequest>(request, i);
                }
                client.EndRequests();

                foreach (var response in client.GetResponses<EchoResponse>())
                {
                    try
                    {
                        string reply = response.Result.EchoResult;
                        Console.WriteLine("\tReceived response for request {0}: {1}", response.GetUserData<int>(), reply);
                    }
                    catch (Exception ex)
                    {
                    }
                }
            }
            session.Close();
        }
    }
}
Here is a second version with Session instead of DurableSession, which works better, but I have a problem with Session reuse:
using (Session session = Session.CreateSession(info))
{
    for (int i = 0; i < 100; i++)
    {
        count = 0;
        Console.WriteLine("done session id = {0}", session.Id);
        NetTcpBinding binding = new NetTcpBinding(SecurityMode.Transport);
        using (BrokerClient<IService1> client = new BrokerClient<IService1>(session, binding))
        {
            // set the get-response handler
            client.SetResponseHandler<EchoResponse>((item) =>
            {
                try
                {
                    Console.WriteLine("\tReceived response for request {0}: {1}",
                        item.GetUserData<int>(), item.Result.EchoResult);
                }
                catch (SessionException ex)
                {
                    Console.WriteLine("SessionException while getting responses in callback: {0}", ex.Message);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Exception while getting responses in callback: {0}", ex.Message);
                }
                if (Interlocked.Increment(ref count) == numRequests)
                    done.Set();
            });

            // start to send requests
            Console.Write("Sending {0} requests...", numRequests);
            for (int j = 0; j < numRequests; j++)
            {
                EchoRequest request = new EchoRequest("hello world!");
                client.SendRequest<EchoRequest>(request, i);
            }
            client.EndRequests();
            Console.WriteLine("done");
            Console.WriteLine("Retrieving responses...");

            // The main thread blocks here waiting for the retrieval process
            // to complete. As the thread that receives the numRequests-th
            // response does a Set() on the event, "done.WaitOne()" will pop.
            done.WaitOne();
            Console.WriteLine("Done retrieving {0} responses", numRequests);
        }
    }
    // Close connections and delete messages stored in the system
    session.Close();
}
I get an exception during the second run of EndRequests: "The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error."
Don't use DurableSession for computations where the individual requests are shorter than about 30 seconds. A DurableSession will be backed by an MSMQ queue in the broker. Your requests and responses may be round-tripped to disk; this will cause performance problems if your amount of computation per request is small. You should use Session instead.
In general, for performance reasons, don't use DurableSession unless you absolutely need the durable behavior in the broker. In this case, since you are calling GetResponses immediately after SendRequests, Session will work fine for you.
You can reuse a Session or DurableSession object to create any number of BrokerClient objects, as long as you haven't called Session.Close.
If it's important to process the responses in parallel on the client side, use BrokerClient.SetResponseHandler to set a callback function which will handle responses asynchronously (rather than use client.GetResponses, which handles them synchronously). Look at the HelloWorldR2 sample code for details.
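A rough sketch of that reuse pattern, based only on the calls already shown in the question (treat it as illustrative, not authoritative HPC Pack guidance): create the Session once, then open and dispose a BrokerClient per batch of requests.
// One (non-durable) Session for the whole run...
using (Session session = Session.CreateSession(info))
{
    NetTcpBinding binding = new NetTcpBinding(SecurityMode.Transport);
    for (int batch = 0; batch < 100; batch++)
    {
        // ...but a fresh BrokerClient per batch of requests.
        using (BrokerClient<IService1> client = new BrokerClient<IService1>(session, binding))
        {
            for (int i = 0; i < numRequests; i++)
                client.SendRequest<EchoRequest>(new EchoRequest("hello world!"), i);
            client.EndRequests();

            foreach (var response in client.GetResponses<EchoResponse>())
                Console.WriteLine(response.Result.EchoResult);
        }
    }
    session.Close(); // only after all BrokerClients are finished
}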

Console.WriteLine slow

I run through millions of records and sometimes I have to debug using Console.WriteLine to see what is going on.
However, Console.WriteLine is very slow, considerably slower than writing to a file.
BUT it is very convenient - does anyone know of a way to speed it up?
If it is just for debugging purposes you should use Debug.WriteLine instead. This will most likely be a bit faster than using Console.WriteLine.
Example
Debug.WriteLine("There was an error processing the data.");
You can use the OutputDebugString API function to send a string to the debugger. It doesn't wait for anything to redraw and this is probably the fastest thing you can get without digging into the low-level stuff too much.
The text you give to this function will go into Visual Studio Output window.
[DllImport("kernel32.dll")]
static extern void OutputDebugString(string lpOutputString);
Then you just call OutputDebugString("Hello world!");
Do something like this:
public static class QueuedConsole
{
    private static StringBuilder _sb = new StringBuilder();
    private static int _lineCount;

    public static void WriteLine(string message)
    {
        _sb.AppendLine(message);
        ++_lineCount;
        if (_lineCount >= 10)
            WriteAll();
    }

    public static void WriteAll()
    {
        Console.WriteLine(_sb.ToString());
        _lineCount = 0;
        _sb.Clear();
    }
}
QueuedConsole.WriteLine("This message will not be written directly, but with nine other entries to increase performance.");
//after your operations, end with write all to get the last lines.
QueuedConsole.WriteAll();
Here is another example: Does Console.WriteLine block?
I recently did a benchmark battery for this on .NET 4.8. The tests included many of the proposals mentioned on this page, including Async and blocking variants of both BCL and custom code, and then most of those both with and without dedicated threading, and finally scaled across power-of-2 buffer sizes.
The fastest method, now used in my own projects, buffers 64K of wide (Unicode) characters at a time from .NET directly to the Win32 function WriteConsoleW without copying or even hard-pinning. Remainders larger than 64K, after filling and flushing one buffer, are also sent directly, and in-situ as well. The approach deliberately bypasses the Stream/TextWriter paradigm so it can (obviously enough) provide .NET text that is already Unicode to a (native) Unicode API without all the superfluous memory copying/shuffling and byte[] array allocations required for first "decoding" to a byte stream.
If there is interest (perhaps because the buffering logic is slightly intricate), I can provide the source for the above; it's only about 80 lines. However, my tests determined that there's a simpler way to get nearly the same performance, and since it doesn't require any Win32 calls, I'll show this latter technique instead.
The following is way faster than Console.Write:
public static class FastConsole
{
    static readonly BufferedStream str;

    static FastConsole()
    {
        Console.OutputEncoding = Encoding.Unicode;  // crucial

        // avoid special "ShadowBuffer" for hard-coded size 0x14000 in 'BufferedStream'
        str = new BufferedStream(Console.OpenStandardOutput(), 0x15000);
    }

    public static void WriteLine(String s) => Write(s + "\r\n");

    public static void Write(String s)
    {
        // avoid endless 'GetByteCount' dithering in 'Encoding.Unicode.GetBytes(s)'
        var rgb = new byte[s.Length << 1];
        Encoding.Unicode.GetBytes(s, 0, s.Length, rgb, 0);

        lock (str)   // (optional, can omit if appropriate)
            str.Write(rgb, 0, rgb.Length);
    }

    public static void Flush() { lock (str) str.Flush(); }
};
Note that this is a buffered writer, so you must call Flush() when you have no more text to write.
I should also mention that, as shown, technically this code assumes 16-bit Unicode (UCS-2, as opposed to UTF-16) and thus won't properly handle 4-byte escape surrogates for characters beyond the Basic Multilingual Plane. The point hardly seems important given the more extreme limitations on console text display in general, but could perhaps still matter for piping/redirection.
Usage:
FastConsole.WriteLine("hello world.");
// etc...
FastConsole.Flush();
On my machine, this gets about 77,000 lines/second (mixed-length) versus only 5,200 lines/sec under identical conditions for normal Console.WriteLine. That's a factor of almost 15x speedup.
These are controlled comparison results only; note that absolute measurements of console output performance are highly variable, depending on the console window settings and runtime conditions, including size, layout, fonts, DWM clipping, etc.
Why Console is slow:
Console output is actually an IO stream that's managed by your operating system. Most IO classes (like FileStream) have async methods but the Console class was never updated so it always blocks the thread when writing.
Console.WriteLine is backed by SyncTextWriter which uses a global lock to prevent multiple threads from writing partial lines. This is a major bottleneck that forces all threads to wait for each other to finish the write.
If the console window is visible on screen then there can be significant slowdown because the window needs to be redrawn before the console output is considered flushed.
Solutions:
Wrap the Console stream with a StreamWriter and then use async methods:
var sw = new StreamWriter(Console.OpenStandardOutput());
await sw.WriteLineAsync("...");
You can also set a larger buffer if you need to use sync methods. The call will occasionally block when the buffer gets full and is flushed to the stream.
// set a buffer size
var sw = new StreamWriter(Console.OpenStandardOutput(), Encoding.UTF8, 8192);
// this write call will block when buffer is full
sw.Write("...");
If you want the fastest writes, though, you'll need to make your own buffer class that writes to memory and flushes to the console asynchronously in the background using a single thread without locking. The new Channel<T> class in .NET Core 2.1 makes this simple and fast. Plenty of other questions show that code, but comment if you need tips.
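For example, here is a minimal sketch of that approach (names are illustrative; it assumes the System.Threading.Channels package on .NET Core 2.1+): producers enqueue lines without blocking, and a single background task drains the channel and writes to the console.
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public static class ChannelConsole
{
    private static readonly Channel<string> _lines =
        Channel.CreateUnbounded<string>(new UnboundedChannelOptions { SingleReader = true });

    private static readonly Task _consumer = Task.Run(async () =>
    {
        // Single reader: no lock contention around Console.
        while (await _lines.Reader.WaitToReadAsync())
            while (_lines.Reader.TryRead(out string line))
                Console.WriteLine(line);
    });

    public static void WriteLine(string line) => _lines.Writer.TryWrite(line);

    public static Task CompleteAsync()
    {
        _lines.Writer.Complete();   // no more writes; lets the consumer drain and exit
        return _consumer;
    }
}
Call ChannelConsole.WriteLine from any thread, then await ChannelConsole.CompleteAsync() once at the end so the remaining lines get flushed.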
A little old thread and maybe not exactly what the OP is looking for, but I ran into the same question recently, when processing audio data in real time.
I compared Console.WriteLine to Debug.WriteLine with this code and used DebugView as a dos box alternative. It's only an executable (nothing to install) and can be customized in very neat ways (filters & colors!). It has no problems with tens of thousands of lines and manages the memory quite well (I could not find any kind of leak, even after days of logging).
After doing some testing in different environments (e.g.: virtual machine, IDE, background processes running, etc) I made the following observations:
Debug is almost always faster
For small bursts of lines (<1000), it's about 10 times faster
For larger chunks it seems to converge to about 3x
If the Debug output goes to the IDE, Console is faster :-)
If DebugView is not running, Debug gets even faster
For really large amounts of consecutive output (>10000 lines), Debug gets slower and Console stays constant. I presume this is due to the memory Debug has to allocate and Console does not.
Obviously, it makes a difference if DebugView is actually "in view" or not, as the many GUI updates have a significant impact on the overall performance of the system, while Console simply hangs whether it is visible or not. But it's hard to put numbers on that one...
I did not try multiple threads writing to the Console, as I think this should generally be avoided. I never had (performance) problems when writing to Debug from multiple threads.
If you compile with Release settings, usually all Debug statements are omitted and Trace should produce the same behaviour as Debug.
I used VS2017 & .Net 4.6.1
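To illustrate that last point (a small aside, not part of the benchmark below): Debug.WriteLine calls are compiled away in Release builds because they are marked [Conditional("DEBUG")], while Trace.WriteLine stays, since TRACE is defined for both configurations by default.
using System.Diagnostics;

class ConditionalDemo
{
    static void Main()
    {
        Debug.WriteLine("Only emitted when DEBUG is defined (Debug builds).");
        Trace.WriteLine("Emitted in both Debug and Release builds (TRACE is defined in both).");
    }
}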
Sorry for so much code, but I had to tweak it quite a lot to actually measure what I wanted to. If you can spot any problems with the code (biases, etc.), please comment. I would love to get more precise data for real life systems.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

namespace Console_vs_Debug
{
    class Program
    {
        class Trial
        {
            public string name;
            public Action console;
            public Action debug;
            public List<float> consoleMeasuredTimes = new List<float>();
            public List<float> debugMeasuredTimes = new List<float>();
        }

        static Stopwatch sw = new Stopwatch();
        private static int repeatLoop = 1000;
        private static int iterations = 2;
        private static int dummy = 0;

        static void Main(string[] args)
        {
            if (args.Length == 2)
            {
                repeatLoop = int.Parse(args[0]);
                iterations = int.Parse(args[1]);
            }

            // do some dummy work
            for (int i = 0; i < 100; i++)
            {
                Console.WriteLine("-");
                Debug.WriteLine("-");
            }

            for (int i = 0; i < iterations; i++)
            {
                foreach (Trial trial in trials)
                {
                    Thread.Sleep(50);
                    sw.Restart();
                    for (int r = 0; r < repeatLoop; r++)
                        trial.console();
                    sw.Stop();
                    trial.consoleMeasuredTimes.Add(sw.ElapsedMilliseconds);

                    Thread.Sleep(1);
                    sw.Restart();
                    for (int r = 0; r < repeatLoop; r++)
                        trial.debug();
                    sw.Stop();
                    trial.debugMeasuredTimes.Add(sw.ElapsedMilliseconds);
                }
            }

            Console.WriteLine("---\r\n");
            foreach (Trial trial in trials)
            {
                var consoleAverage = trial.consoleMeasuredTimes.Average();
                var debugAverage = trial.debugMeasuredTimes.Average();
                Console.WriteLine(trial.name);
                Console.WriteLine($" console: {consoleAverage,11:F4}");
                Console.WriteLine($" debug:   {debugAverage,11:F4}");
                Console.WriteLine($"{consoleAverage / debugAverage,32:F2} (console/debug)");
                Console.WriteLine();
            }
            Console.WriteLine("all measurements are in milliseconds");
            Console.WriteLine("anykey");
            Console.ReadKey();
        }

        private static List<Trial> trials = new List<Trial>
        {
            new Trial
            {
                name = "constant",
                console = delegate
                {
                    Console.WriteLine("A static and constant string");
                },
                debug = delegate
                {
                    Debug.WriteLine("A static and constant string");
                }
            },
            new Trial
            {
                name = "dynamic",
                console = delegate
                {
                    Console.WriteLine("A dynamically built string (number " + dummy++ + ")");
                },
                debug = delegate
                {
                    Debug.WriteLine("A dynamically built string (number " + dummy++ + ")");
                }
            },
            new Trial
            {
                name = "interpolated",
                console = delegate
                {
                    Console.WriteLine($"An interpolated string (number {dummy++,6})");
                },
                debug = delegate
                {
                    Debug.WriteLine($"An interpolated string (number {dummy++,6})");
                }
            }
        };
    }
}
Just a little trick I use sometimes: If you remove focus from the Console window by opening another window over it, and leave it until it completes, it won't redraw the window until you refocus, speeding it up significantly. Just make sure you have the buffer set up high enough that you can scroll back through all of the output.
Try using the System.Diagnostics Debug class? You can accomplish the same things as using Console.WriteLine.
You can view the available class methods here.
