Threading Web requests handled in Main? - c#

I'm writing an application in C#, and I am creating multiple BackgroundWorker threads to grab information from webpages. Despite them being BackgroundWorkers, my GUI Form is becoming unresponsive.
When I am debugging, I pause when the program goes unresponsive, and I can see that I am in the Main Thread, and I am paused on the webpage fetching method. This method is only called from new threads, though, so I can’t figure out why I would be there in the Main Thread.
Does this make any sense? What can I do to make sure the web requests are only being handled in their respective threads?
EDIT: some code and explanation
I am processing a large list of addresses. Each thread will be processing one or more addresses. I can choose how many threads I want to create (I keep it modest :))
//in “Controller” class
public void process()
{
for (int i = 1; i <= addressList.Count && i <= numthreads; i++)
{
BackgroundWorker bw = new BackgroundWorker();
bw.DoWork += doWork;
bw.RunWorkerAsync((object)i);
}
}
public void doWork(object sender, DoWorkEventArgs e)
{
//create an object that has the web fetching method, call it WorkObject
//WorkObject keeps a reference to Controller.
//When it is done getting information, it will send it to Controller to print
//generate a smaller list of addresses to work on, using e.Argument (should be 'i' from the above 'for' loop)
WorkObject.workingMethod();
}
When WorkObject is created, it uses “i” to know what thread number it is. It will use this to get a list of web addresses to get information from (from a larger list of addresses which is shared by the main Form, the Controller, and each of the WorkObjects – each thread will process a smaller list of addresses). As it iterates over the list, it will call the “getWebInfo” method.
//in “WorkObject” class
public static WebRequest request;
public void workingMethod()
{
//iterate over the small list of addresses. For each one,
getWebInfo(address);
//process the info a bit...then
myController.print();
//note that this isn’t a simple “for” loop, it involves event handlers and threading
//Timers to make sure one is done before going on to the next
}
public string getWebInfo (string address)
{
request = WebRequest.Create(address);
WebResponse response = request.GetResponse();
StreamReader reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8);
string content = reader.ReadToEnd();
return content;
}

You should be performing all of the web work in your BackgroundWorker's DoWork event handler. Are you?
Some clipped code may help us understand what's going on.

In a similar situation I found I had better control by creating my own threads (no BackgroundWorker and no ThreadPool) and throttling the number of active connections. Write the IP addresses to a Queue and then pick each one off, passing it to a helper class where you call its DoSomething method on a new thread. Throttle by incrementing a counter when you launch a thread and decrement it when the thread completes. You can use callback routines from your helper class to signal your UI thread to update the interface or indicate a thread is finished.
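A rough sketch of that throttling approach (hypothetical names throughout; FetchAndProcess stands in for the helper class's DoSomething method):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class ThrottledDownloader
{
    private readonly Queue<string> addresses = new Queue<string>();
    private readonly object sync = new object();
    private int activeThreads;
    private const int MaxThreads = 5; // the throttle limit

    public void Enqueue(string address)
    {
        lock (sync) { addresses.Enqueue(address); }
        LaunchNext();
    }

    private void LaunchNext()
    {
        lock (sync)
        {
            while (activeThreads < MaxThreads && addresses.Count > 0)
            {
                string address = addresses.Dequeue();
                activeThreads++; // increment when launching
                new Thread(() =>
                {
                    try { FetchAndProcess(address); } // placeholder for the real work
                    finally
                    {
                        lock (sync) { activeThreads--; } // decrement when done
                        LaunchNext();                    // callback launches the next one
                    }
                }) { IsBackground = true }.Start();
            }
        }
    }

    private void FetchAndProcess(string address) { /* download and parse here */ }
}
```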
Use your UI to vary the throttle limit and watch the Task Manager to see the effect on memory and CPU usage. You can add a maxconnection setting in your app.config to get more simultaneous connections.
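The maxconnection setting looks something like this in app.config (the wildcard address and the limit of 20 are illustrative; pick values that suit your scraper):

```xml
<configuration>
  <system.net>
    <connectionManagement>
      <!-- raise the per-host simultaneous connection limit (default is 2) -->
      <add address="*" maxconnection="20" />
    </connectionManagement>
  </system.net>
</configuration>
```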

I couldn't figure out why the background workers were causing the hanging, but instead of using synchronous HTTP requests I switched to asynchronous ones using this guide:
http://www.developerfusion.com/code/4654/asynchronous-httpwebrequest/
Cleared up all the hanging. Thanks for the responses.
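For later readers, the asynchronous pattern from that guide looks roughly like this (a minimal sketch; FetchAsync and onDone are illustrative names, and error handling is omitted):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

// Starts the request and returns immediately; onDone runs on a worker thread
// when the response arrives, so the UI thread is never blocked.
static void FetchAsync(string address, Action<string> onDone)
{
    var request = (HttpWebRequest)WebRequest.Create(address);
    request.BeginGetResponse(ar =>
    {
        using (var response = request.EndGetResponse(ar))
        using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
        {
            onDone(reader.ReadToEnd());
        }
    }, null);
}
```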

Related

Potential race conditions with ConcurrentBag and multithreaded application

I've been wrestling for the past few months with how to improve a process where I'm using a DispatcherTimer to periodically check resources to see if they need to be updated/processed. After updating the resource ("Product"), I move the Product to the next step in the process, etc. The resource may or may not be available immediately.
The reason I have been struggling is two-fold. One reason is that I want to implement this process asynchronously, since it is just synchronous at the moment. The second reason is that I have identified the area where my implementation is stuck and it seems like not an uncommon design pattern but I have no idea how to describe it succinctly, so I can't figure out how to get a useful answer from google.
A rather important note is that I am accessing these Products via a direct USB connection, so I am using LibUsbDotNet to interface with the devices. I have made the USB connections asynchronous so I can connect to multiple Products at the same time and process an arbitrary number at once.
public class Product
{
public bool IsSoftwareUpdated = false;
public bool IsProductInformationCorrect = false;
public bool IsEOLProcessingCompleted = false;
public Product() {}
~Product() {}
}
public class ProcessProduct
{
List<Product> bagOfProducts = new List<Product>(new Product[10]);
ConcurrentBag<Product> UnprocessedUnits = new ConcurrentBag<Product>();
ConcurrentBag<Product> CurrentlyUpdating = new ConcurrentBag<Product>();
ConcurrentBag<Product> CurrentlyVerifyingInfo = new ConcurrentBag<Product>();
ConcurrentBag<Product> FinishedProcessing = new ConcurrentBag<Product>();
DispatcherTimer _timer = new DispatcherTimer();
public ProcessProduct()
{
_timer.Tick += Timer_Tick; //Every 1 second, call Timer_Tick
_timer.Interval = new TimeSpan(0,0,1); //1 Second timer
bagOfProducts.ForEach(o => UnprocessedUnits.Add(o)); //Fill the UnprocessedUnits with all products
StartProcessing();
}
private void StartProcessing()
{
_timer.Start();
}
private void Timer_Tick(object sender, EventArgs e)
{
ProductOrganizationHandler();
foreach(Product prod in CurrentlyUpdating.ToList())
{
UpdateProcessHandler(prod); //Async function that uses await
}
foreach(Product prod in CurrentlyVerifyingInfo.ToList())
{
VerifyingInfoHandler(prod); //Async function that uses Await
}
if(FinishedProcessing.Count == bagOfProducts.Count)
{
_timer.Stop(); //If all items have finished processing, then stop the process
}
}
private void ProductOrganizationHandler()
{
//Takes (read: REMOVES) a Product from each ConcurrentBag one by one and moves that item to the bag it needs to go to,
//depending on which process step is finished
//(or puts it back in the same bag if that step was not finished).
//E.G, all items are moved from UnprocessUnits to CurrentlyUpdating or CurrentlyVerifying etc.
//If a product is finished updating, it is moved from CurrentlyUpdating to CurrentlyVerifying or FinishedProcessing
}
private async void UpdateProcessHandler(Product prod)
{
await Task.Delay(1000).ConfigureAwait(false);
//Does some actual work validating USB communication and then running through the USB update
}
private async void VerifyingInfoHandler(Product prod)
{
await Task.Delay(1000).ConfigureAwait(false);
//Does actual work here and communicates with the product via USB
}
}
Full Compile-ready code example available via my code on Pastebin.
So, my question really is this: Are there any meaningful race conditions in this code? Specifically, with the ProductOrganizationHandler() code and the looping through the ConcurrentBags in Timer_Tick() (since a new call to Timer_Tick() happens every second). I'm sure this code works the majority of the time, but I am afraid of a hard-to-track bug later on that happens because of a rare race condition when, say, ProductOrganizationHandler() takes > 1 sec to run for some dumb reason.
As a secondary note: Is this even the best design pattern for this type of process? C# is my first OOP language and all self-taught on the job (nearly all of my job is Embedded C) so I don't have any formal experience with OOP design patterns.
My main goal is to asynchronously Update/Verify/Communicate with each device as it becomes available via USB. Once all products in the list are finished (or a timeout), then the process finishes. This project is in .NET 5.
EDIT: For anyone that comes along later with the same question, here's what I did.
I did not understand that DispatcherTimer adds Ticks to the Dispatcher queue. This implies that a Tick will only run if another instance of Tick is not already running or, worded another way, Timer_Tick will run to completion before the next Timer_Tick instance runs.
So, most (all?) of the threading/concurrency concerns I had were unfounded, and I can treat Timer_Tick as a single-threaded, non-concurrent function (which it is).
Also, to keep Ticks from piling up, I ran _timer.Stop() at the beginning of Timer_Tick and restarted the timer at the end of Timer_Tick.
First of all, you are using DispatcherTimer, which raises ticks on the UI thread. So as far as I can see there is no multithreading going on in the example. There are other timers, like System.Timers.Timer, that raise events on a background thread if that is the intent. But if you just want to check and update status every so often, and are not running any code that blocks, just using the UI thread is fine and will simplify things a lot.
Even if we assume ProductOrganizationHandler did run on a worker thread, it would still generally be safe to remove items from one concurrent collection and put them in another. But it would not guarantee that items are processed in any particular order, nor that any specific item is processed by a given tick of the timer. Since the timer ticks periodically, though, all the items should eventually be processed. Keep in mind that most timers need to be disposed, so you need to handle that somehow, including when the processing is stopped prematurely.
Keep in mind that async does not mean concurrent, so I would not use it unless your USB library provides async methods. Even then I would avoid async void since this promotes exceptions to the captured synchronization context, potentially crashing the application, so it should mostly be used in the outermost layer, like button event handlers, or timers, and then you should probably handle exceptions somehow.
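As an illustration of that last point, here is a hedged sketch of keeping async void at the outermost layer only (UpdateProcessAsync is a hypothetical Task-returning version of UpdateProcessHandler):

```csharp
private async void Timer_Tick(object sender, EventArgs e) // outermost layer: async void is tolerable here
{
    try
    {
        foreach (Product prod in CurrentlyUpdating.ToList())
            await UpdateProcessAsync(prod); // inner methods return Task, not void
    }
    catch (Exception ex)
    {
        // handle/log here: an exception escaping an async void handler can crash the app
    }
}

private async Task UpdateProcessAsync(Product prod)
{
    await Task.Delay(1000); // stand-in for the USB work
}
```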
As for the best way to do it, I would take a look at the TPL Dataflow library.
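A minimal sketch of what that could look like with Dataflow (assumptions: the System.Threading.Tasks.Dataflow NuGet package is referenced, and UpdateProcessAsync is a hypothetical Task-returning version of UpdateProcessHandler):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

public async Task ProcessAllAsync(IEnumerable<Product> products)
{
    // ActionBlock runs the delegate for each posted item,
    // at most 4 at a time, replacing the timer/bag shuffling.
    var updater = new ActionBlock<Product>(
        async p => await UpdateProcessAsync(p),
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

    foreach (var p in products)
        updater.Post(p);

    updater.Complete();
    await updater.Completion; // completes once every product is processed
}
```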

C# Slow UI Performance when calling BeginInvoke frequently

I have a main form called ProxyTesterForm, which has a child form ProxyScraperForm. When ProxyScraperForm scrapes a new proxy, ProxyTesterForm handles the event by testing the scraped proxy asynchronously, and after testing adds the proxy to a BindingList which is the datasource of a DataGridView.
Because I am adding to a databound list which was created on the UI thread I am calling BeginInvoke on the DataGridView so the update happens on the appropriate thread.
Without the BeginInvoke call in the method I will post below, I can drag the form around on my screen during processing and it doesn't stutter and is smooth. With the BeginInvoke call, it's doing the opposite.
I have a few ideas on how to fix it, but wanted to hear from smarter people than me here on SO so I solve this properly.
Use a semaphore slim to control the amount of simultaneous updates.
Add asynchronously processed items to a list outside of the scope of the the method I will post below, and iterate over that list in a Timer_Tick event handler, calling BeginInvoke for each item in the list every 1 second, then clearing that list and wash, rinse, repeat until the job is done.
Give up the convenience of data binding and go virtual mode.
Anything else someone might suggest here.
private void Site_ProxyScraped(object sender, Proxy proxy)
{
Task.Run(async () =>
{
proxy.IsValid = await proxy.TestValidityAsync(judges[0]);
proxiesDataGridView.BeginInvoke(new Action(() => { proxies.Add(proxy); }));
});
}
In Windows every thread that has UI has a message queue - this queue is used to send UI messages for the windows for this thread, those message include things like mouse moved, mouse up/down, etc.
Somewhere in every UI framework there is a loop that reads a message from the queue, processes it and then wait for the next message.
Some messages are lower priority, for example the mouse move message is generated only when the thread is ready to process it (because the mouse tends to move a lot)
BeginInvoke also uses this mechanism, it send a message telling the loop there's code it needs to run.
What you are doing is flooding the queue with your BeginInvoke message and not letting it handle UI events.
The standard solution is to limit the amount of BeginInvoke calls, for example, collect all the items you need to add and use one BeginInvoke call to add them all.
Or add in batches, if you make just one BeginInvoke call per second for all the objects found in this second you probably not effect the UI responsiveness and the user won't be able to tell the difference.
Note: For the actual answer on why this is happening, see @Nir's answer. This is only an approach to overcome some problems and to give some direction. It's not flawless, but it is in line with the conversation in the comments.
Just some quick proto type to add some separation of layers (minimal attempt):
//member field which contains all the actual data
List<Proxy> _proxies = new List<Proxy>();
//this is some trigger: it might be an ellapsed event of a timer or something
private void OnSomeTimerOrOtherTrigger()
{
UIupdate();
}
//just a helper function
private void UIupdate()
{
var local = _proxies.ToList(); //take a snapshot so the UI works on a stable copy
proxiesDataGridView.BeginInvoke(new Action(() =>
{
//someway to add *new ones* to UI
//perform actions on local copy
}));
}
private void Site_ProxyScraped(object sender, Proxy proxy)
{
Task.Run(async () =>
{
proxy.IsValid = await proxy.TestValidityAsync(judges[0]);
//add to list
_proxies.Add(proxy);
});
}

Wait For Asynchronous Events to Finish in Foreach

How can I send requests to a server in a foreach loop, and have the loop wait for the information returned in each response to be processed before continuing on to the next request?
My problem: in the foreach I send many requests, and the loop continues without processing the information I get in each response.
For example:
foreach (DataLoader.GetInfo items in listBoxFetInfo.Items)
{
DownloadInfo(items.CompanyName);
}
and
void DownloadInfo(string name)
{
int requestId = feed.SendCompanyNameRequest(symbol.ToUpper(), sendType, sendTimePeriod, sendDateFrom, DateTime.Now);
}
and
feed.RequestCompanyName += new IFeedEvents_RequestCompanyNameEventHandler(feed_RequestName);
and
void feed_RequestName(int originalRequestId, short ResponseCode, string symbol, short symbolStatusCode, object records)
{
//save to file
}
I can't use Start Multiple Async Tasks and Process Them As They Complete because that solution requires a CancellationToken, which I can't add (not in void feed_RequestName).
What other solution might work for this problem?
Also, I can't change the signatures of:
feed_RequestName(int originalRequestId, short ResponseCode, string symbol, short symbolStatusCode, object records)
and
feed.SendCompanyNameRequest(symbol.ToUpper(), sendType, sendTimePeriod, sendDateFrom, DateTime.Now);
I'm taking a stab in the dark here as to how your thing works, but this is what I would do. Since you can't modify the signatures, you can't use Task, like your last question suggested.
What I see here is that you have a method that begins a request and then an event that is called when the request is completed. I'm assuming your request is running on a separate thread or something inside the library where feed is from. What you need to do is wait for that event to complete once before proceeding to the next item. If this isn't what your question is asking or how your library works, let me know.
If you are not executing the requests in parallel (meaning only one is executing at a time and nobody else is using your feed object), you can use the System.Threading.ManualResetEvent class to make your one method depend on the other really easily. What you can do is the following:
In your DownloadInfo method, Reset the ManualResetEvent so that whenever somebody calls WaitOne, it will "block" the thread and prevent it from going on. It will wait until somebody else calls Set on the ManualResetEvent.
After calling the method that gets your requestId, call WaitOne on the ManualResetEvent. This will wait until "somebody else" calls Set on the ManualResetEvent.
In your feed_RequestName method, Set the manual reset event once you are done processing. This will make the WaitOne that you called in step 2 stop blocking and continue. If you store the results of the request in some variables on the object that these things live in before calling Set, you can access the results from your DownloadInfo method after the WaitOne.
A short, incomplete example:
class YourClassWhereYouPutTheseThings
{
private ManualResetEvent _mre = new ManualResetEvent(true);
//...your other stuff here...
public YourClassWhereYouPutTheseThings()
{
//...your existing stuff
feed.RequestCompanyName += new IFeedEvents_RequestCompanyNameEventHandler(feed_RequestName);
//...your existing stuff
}
void YourMethod()
{
foreach (DataLoader.GetInfo items in listBoxFetInfo.Items)
{
DownloadInfo(items.CompanyName);
}
}
void DownloadInfo(string name)
{
this._mre.Reset(); //make everyone listening to _mre wait until Set is called
int requestId = feed.SendCompanyNameRequest(symbol.ToUpper(), sendType, sendTimePeriod, sendDateFrom, DateTime.Now);
this._mre.WaitOne(); //wait for the feed_RequestName method to be called once
}
void feed_RequestName(int originalRequestId, short ResponseCode, string symbol, short symbolStatusCode, object records)
{
//save to file
//store your information on the object somewhere if you would like
//after saving to your file, set the manualresetevent so that the WaitOne() above can proceed
this._mre.Set();
}
}
Important note: If your feed is shared among multiple objects which could execute at the same time, you need to make sure that you don't interpret requests by other objects as the request that your DownloadInfo method made. Any request will cause your feed_RequestName method to be called, so you need to figure out which call of that method corresponds to your request. Here is one suggestion on how to make that work:
Store the requestId that you get in your DownloadInfo somewhere that feed_RequestName can access. This could be problematic if feed_RequestName gets called before your calling method has a chance to execute. You will need to figure out a way around this problem.
In feed_RequestName, compare against the stored request id and only call Set on the ManualResetEvent if it matches.
Another note: Your library is geared towards executing requests in parallel. Waiting for each request to finish before going to the next one is really inefficient. What you should really do is something like (pseudocode):
DownloadItems:
Lock the object so that only one person uses the method at a time
foreach item in requests
call your start request method and get the id
put the id into a thread-safe set stored on the object
wait for the set to become empty
feed_RequestName:
Do your thing that you've been doing
Remove the id from the thread safe set
Can you use while instead of foreach?
Like:
while(not done processing all requests) {
//grab and process next request
}

Multiple Threads

I post a lot here regarding multithreading, and the great Stack Overflow community has helped me a lot in understanding it.
All the examples I have seen online only deal with one thread.
My application is a scraper for an insurance company (family company... all free of charge). Anyway, the user is able to select how many threads they want to run. So let's say, for example, the user wants the application to scrape 5 sites at a time, and then later in the day he chooses 20 threads because his computer isn't doing anything else and has the resources to spare.
Basically the application builds a list of say 1000 sites to scrape. A thread goes off and does that and updates the UI and builds the list.
When thats finished another thread is called to start the scraping. Depending on the number of threads the user has set to use it will create x number of threads.
What's the best way to create these threads? Should I create 1000 threads in a list and loop through them? If the user has set 5 threads to run, it would loop through 5 at a time.
I understand threading, but it's the application logic which is catching me out.
Any ideas or resources on the web that can help me out?
You could consider using a thread pool for that:
using System;
using System.Threading;
public class Example
{
public static void Main()
{
ThreadPool.SetMaxThreads(100, 10);
// Queue the task.
ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadProc));
Console.WriteLine("Main thread does some work, then sleeps.");
Thread.Sleep(1000);
Console.WriteLine("Main thread exits.");
}
// This thread procedure performs the task.
static void ThreadProc(Object stateInfo)
{
Console.WriteLine("Hello from the thread pool.");
}
}
This scraper, does it use a lot of CPU when it's running?
If it does a lot of communication with these 1000 remote sites, downloading their pages, that may be taking more time than the actual analysis of the pages.
And how many CPU cores does your user have? If they have 2 (which is common these days) then beyond two simultaneous threads performing analysis, they aren't going to see any speed up.
So you probably need to "parallelize" the downloading of the pages. I doubt you need to do the same for the analysis of the pages.
Take a look into asynchronous IO, instead of explicit multi-threading. It lets you launch a bunch of downloads in parallel and then get called back when each one completes.
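As a sketch of that idea using the event-based WebClient API (Analyze is a placeholder for your page-analysis code):

```csharp
using System;
using System.Net;

// Kicks off all downloads without blocking; each completion callback
// fires when its page has arrived, so downloads overlap "for free".
static void StartDownloads(string[] urls)
{
    foreach (var url in urls)
    {
        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error == null)
                Analyze(e.Result); // analysis runs only when the page is ready
            ((WebClient)s).Dispose();
        };
        client.DownloadStringAsync(new Uri(url));
    }
}

static void Analyze(string page) { /* parse the page here */ }
```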
If you really just want the application, use something someone else already spent time developing and perfecting:
http://arachnode.net/
arachnode.net is a complete and comprehensive .NET web crawler for downloading, indexing and storing Internet content including e-mail addresses, files, hyperlinks, images, and Web pages.
Whether interested or involved in screen scraping, data mining, text mining, research or any other application where a high-performance crawling application is key to the success of your endeavors, arachnode.net provides the solution you need for success.
If you also want to write one yourself because it's a fun thing to write (I wrote one not long ago, and yes, it is a lot of fun), then you can refer to this PDF provided by arachnode.net, which really explains in detail the theory behind a good web crawler:
http://arachnode.net/media/Default.aspx?Sort=Downloads&PageIndex=1
Download the PDF entitled "Crawling the Web" (second link from the top). Scroll to Section 2.6, "Multi-threaded Crawlers". That's what I used to build my crawler, and I must say, I think it works quite well.
I think this example is basically what you need.
public class WebScraper
{
private readonly int totalThreads;
private readonly List<System.Threading.Thread> threads;
private readonly List<Exception> exceptions;
private readonly object locker = new object();
private volatile bool stop;
public WebScraper(int totalThreads)
{
this.totalThreads = totalThreads;
threads = new List<System.Threading.Thread>(totalThreads);
exceptions = new List<Exception>();
for (int i = 0; i < totalThreads; i++)
{
var thread = new System.Threading.Thread(Execute);
thread.IsBackground = true;
threads.Add(thread);
}
}
public void Start()
{
foreach (var thread in threads)
{
thread.Start();
}
}
public void Stop()
{
stop = true;
foreach (var thread in threads)
{
if (thread.IsAlive)
{
thread.Join();
}
}
}
private void Execute()
{
try
{
while (!stop)
{
// Scrape away!
}
}
catch (Exception ex)
{
lock (locker)
{
// You could have a thread checking this collection and
// reporting it as you see fit.
exceptions.Add(ex);
}
}
}
}
The basic logic is:
You have a single queue in which you put the URLs to scrape then you create your threads and use a queue object to which every thread has access. Let the threads start a loop:
1. Lock the queue.
2. Check if there are items in the queue; if not, unlock the queue and end the thread.
3. Dequeue the first item in the queue.
4. Unlock the queue.
5. Process the item.
6. Invoke an event that updates the UI (remember to lock the UI controller).
7. Return to step 1.
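The worker loop described above could be sketched like this (ProcessSite and the UI-update hook are placeholders for your own code):

```csharp
using System.Collections.Generic;

class ScrapeWorkers
{
    private readonly Queue<string> urlQueue = new Queue<string>();
    private readonly object queueLock = new object();

    // Each worker thread runs this method until the queue is drained.
    private void WorkerLoop()
    {
        while (true)
        {
            string url;
            lock (queueLock)
            {
                if (urlQueue.Count == 0)
                    return;               // queue empty: end this thread
                url = urlQueue.Dequeue(); // take the next job
            }
            ProcessSite(url);             // scrape outside the lock
            // raise an event / BeginInvoke here to update the UI
        }
    }

    private void ProcessSite(string url) { /* download and parse here */ }
}
```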
Just let the Threads do the "get stuff from the queue" part (pulling the jobs) instead of giving them the urls (pushing the jobs), that way you just say
YourThreadManager.StartThreads(numberOfThreadsTheUserWants);
and everything else happens automagically. See the other replies to find out how to create and manage the threads .
I solved a similar problem by creating a worker class that uses a callback to signal the main app that a worker is done. Then I create a queue of 1000 threads and then call a method that launches threads until the running thread limit is reached, keeping track of the active threads with a dictionary keyed by the thread's ManagedThreadId. As each thread completes, the callback removes its thread from the dictionary and calls the thread launcher.
If a connection is dropped or times out, the callback reinserts the thread back into the queue. Lock around the queue and the dictionary. I create threads instead of using the thread pool because the overhead of creating a thread is insignificant compared to the connection time, and it allows me to have a lot more threads in flight. The callback also provides a convenient place to update the user interface, even allowing you to change the thread limit while it's running. I've had over 50 open connections at one time. Remember to increase the maxconnection setting in your app.config (the default is two).
I would use a queue and a condition variable and mutex, and start just the requested number of threads, for example, 5 or 20 (and not start 1,000).
Each thread blocks on the condition variable. When woken up, it dequeues the first item, unlocks the queue, works with the item, locks the queue and checks for more items. If the queue is empty, sleep on the condition variable. If not, unlock, work, repeat.
While the mutex is locked, it can also check if the user has requested the count of threads to be reduced. Just check if count > max_count, and if so, the thread terminates itself.
Any time you have more sites to queue, just lock the mutex and add them to the queue, then broadcast on the condition variable. Any threads that are not already working will wake up and take new work.
Any time the user increases the requested thread count, just start them up and they will lock the queue, check for work, and either sleep on the condition variable or get going.
Each thread will be continually pulling more work from the queue, or sleeping. You don't need more than 5 or 20.
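In C#, the queue + condition variable + mutex pattern described above maps onto lock and Monitor.Wait/PulseAll. A minimal sketch (Process is a placeholder for the scraping work, and shutdown handling is simplified):

```csharp
using System.Collections.Generic;
using System.Threading;

class WorkPool
{
    private readonly Queue<string> work = new Queue<string>();
    private readonly object gate = new object(); // the mutex
    private bool shutdown;

    // Each of the 5 or 20 worker threads runs this loop.
    private void Worker()
    {
        while (true)
        {
            string item;
            lock (gate)
            {
                while (work.Count == 0 && !shutdown)
                    Monitor.Wait(gate);   // sleep until work arrives
                if (shutdown) return;
                item = work.Dequeue();
            }
            Process(item);                // do the work outside the lock
        }
    }

    // Producer side: enqueue new sites and wake the sleepers.
    public void AddWork(IEnumerable<string> urls)
    {
        lock (gate)
        {
            foreach (var u in urls) work.Enqueue(u);
            Monitor.PulseAll(gate);       // "broadcast on the condition variable"
        }
    }

    private void Process(string url) { /* scrape here */ }
}
```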
Consider using the event-based asynchronous pattern (AsyncOperation and AsyncOperationManager Classes)
You might want to take a look at the ProcessQueue article on CodeProject.
Essentially, you'll want to create (and start) the number of threads that are appropriate, in your case that number comes from the user. Each of these threads should process a site, then find the next site needed to process. Even if you don't use the object itself (though it sounds like it would suit your purposes pretty well, though I'm obviously biased!) it should give you some good insight into how this sort of thing would be done.

Creating a Loop to Pause a Script While a Callback Function Operates

I am currently using a third-party component to handle telnet connections in .NET. I want it to be synchronous: I send a command to the receiving telnet server, and then I get the response back as text or a byte array. The only problem is that the component is not set up to do that. The component allows me to send commands to the server, but the response is returned via a function handler. So in essence, I need a way to pause the application while the handler does its processing. Here is an example of how I plan to get around that issue:
static void Main(string[] args)
{
Telnet telCon = new Telnet();
telCon.OnDataIn += new Telnet.OnDataInHandler(HandleDataIn);
telCon.Connect(remoteHostStr);
while (true) ;
}
public static void HandleDataIn(object sender, TelnetDataInEventArgs e)
{
string responseStr = e.Text;
if (responseStr.Contains("Username:"))
{
((Telnet)sender).Send(System.Text.ASCIIEncoding.ASCII.GetBytes(username));
}
else if (responseStr.Contains("Password:"))
{
((Telnet)sender).Send(System.Text.ASCIIEncoding.ASCII.GetBytes(password));
}
}
The solution above will not work as-is, since the while loop will run forever, but I will probably build a future version that uses some sort of global variable to track whether the loop still needs to run. However, everything I have been taught about programming says this is very dirty. Can anyone think of another way around my dilemma?
Thanks,
Chris
Here is an example of using a ManualResetEvent to suspend execution (and delay program end) until your event handler says it's finished.
static ManualResetEvent finishGate;
static void Main(string[] args)
{
finishGate = new ManualResetEvent(false); // initial state unsignaled
Telnet telCon = new Telnet();
telCon.OnDataIn += new Telnet.OnDataInHandler(HandleDataIn);
telCon.Connect(remoteHostStr);
finishGate.WaitOne(); // waits until the gate is signaled
}
public static void HandleDataIn(object sender, TelnetDataInEventArgs e)
{
// handle event
if (processingComplete)
finishGate.Set(); // signals the gate
}
The WaitOne() method of ManualResetEvent also includes overrides that accept a timespan or number of milliseconds. It returns bool - true if it was signaled, false if it timed out. If you put that in a loop, you could have your main thread wake up every 30 seconds and perform some housekeeping tasks, but still have an instantaneous response when the gate is signaled.
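That timeout loop could look something like this, reusing the finishGate field from the example above (the 30-second interval and the housekeeping comment are illustrative):

```csharp
// Wake every 30 seconds for housekeeping until the handler signals the gate.
while (!finishGate.WaitOne(TimeSpan.FromSeconds(30)))
{
    // WaitOne returned false: 30 seconds elapsed without a signal.
    // Do periodic housekeeping here, then loop back and wait again.
}
// WaitOne returned true: the handler called finishGate.Set().
```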
Your while loop:
while(true) ;
will drive CPU usage to 100% (well, 100% of 1 core on a multicore machine) and leave it there, permanently.
This will starve other processes of CPU power, and may prevent the Telnet component from working at all because you've bypassed the message pump.
There are better ways, but without more information on what you're doing, it will be hard to advise you.
To begin, do you want a WindowsForms/WPF/Console application?
[And please, use comments to answer, not Answers.]
In general, when you really need to wait, use a WaitHandle. In this case, a ManualResetEvent would probably be what you need.
A better way would be to spawn the Telnet processing to another thread. That way you can get the main thread to wait for the telnet processing to complete.
Have a look here for some very good tutorials on threading.
