Starting, Stopping A Thread - C#

I am not really sure how threads work.
Here is my code. Upon clicking a send button:
protected void BtnSend_Click(object sender, EventArgs e)
{
Thread threadA = new Thread(SendSMS);
threadA.Start();
}
protected void SendSMS()
{
//some validations here
Thread threadB = new Thread(loadingScreen);
threadB.Start();
threadB.Join();
//code that actually sends the required Mail
threadB.Stop();
loading.Visible = false;
}
threadB calls this method, which shows a div (called loading) that covers the screen and prevents the user from pressing anything:
protected void loadingScreen()
{
loading.Visible = true;
}
Now the mail is being sent but the loading screen (div) is not becoming visible.
What am I doing wrong?

You have to rethink your approach when you're writing ASP.NET vs. a rich client application. In short (really, really short): the web browser (client) sends a request to the server. The server handles that request (that part is your code-behind) and returns a result to the web browser.
When you show a DIV in your code-behind, do some work, and then hide it again, only the final result arrives at the web browser.
There are multiple ways to achieve the visual effect you want, but you must know about the ASP.NET page life cycle first. Start here, for example.

I think you want a responsive application while you compute a huge task.
In WinForms you have to be careful, because if you want to change some UI, like the Text of a Label, you have to synchronize both threads (the UI thread and your worker thread).
If you are running .NET 4.0 you should use the Task class, because then you don't need to synchronize manually and you can also use anonymous methods.
protected void SendSMS()
{
    loading.Visible = true;
    var task = Task.Factory.StartNew(() =>
    {
        // code that actually sends the required Mail
    });
    task.Wait();
    loading.Visible = false;
}
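If the goal is a WinForms UI that stays responsive while the mail is sent, an alternative to task.Wait() is to continue on the UI's synchronization context when the task finishes. A minimal sketch (assuming .NET 4.0, with SendMail as a hypothetical placeholder for the mail-sending code):
loading.Visible = true;
Task.Factory.StartNew(() => SendMail()) // SendMail is a hypothetical placeholder
    .ContinueWith(t =>
    {
        // This continuation runs back on the UI thread.
        loading.Visible = false;
    }, TaskScheduler.FromCurrentSynchronizationContext());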

Actually the loading div gets visible and then is hidden again quickly. Join returns as soon as threadB has enabled the div; then the email is sent and the div is disabled again. Sending the email and disabling the div happen on the same thread.

Why do you use threadB? You can do the whole operation with threadA alone:
protected void SendSMS()
{
    // some validations here
    loading.Visible = true;
    // code that actually sends the required Mail
    loading.Visible = false;
}
Watch out for the cross-thread operation exception.
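In a WinForms application that typically means marshalling the UI update back onto the UI thread. A minimal sketch (assuming loading is a WinForms control):
// Marshal the update onto the UI thread to avoid the cross-thread operation exception.
if (loading.InvokeRequired)
    loading.Invoke(new Action(() => loading.Visible = true));
else
    loading.Visible = true;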

Related

C# Trouble with event handlers on dying threads

First of all, my Main is STAThread and I am not able to change this without facing problems with the rest of my code.
So, I am currently using Rapi2 to pull and push files between my PDA and computer. Now, since there is quite a bit of number crunching, I would like to do this on a separate thread. The first thing I do is create a RemoteDeviceManager and then add an event handler for when a device connects.
public void Initialize()
{
_deviceManager = new RemoteDeviceManager();
_deviceManager.DeviceConnected += DeviceConnected;
}
As you can see, when my device connects it triggers DeviceConnected.
This is the handler where I end up pulling and pushing a database and doing some number work.
private void DeviceConnected(object sender, RemoteDeviceConnectEventArgs e)
{
if (e.Device == null) return;
... (unimportant code)
}
Now, the problem here is that I want to run the code inside DeviceConnected in a new thread, but I am unable to access e inside the new thread since it was initialized outside that thread.
So what I tried next was to create a new thread before calling Initialize.
public Watcher()
{
_dataThread = new Thread(Initialize);
_dataThread.IsBackground = true;
_dataThread.Name = "Data Thread";
_dataThread.SetApartmentState(ApartmentState.MTA);
_dataThread.Start();
}
But the thread dies and thus never fires my event handler.
I tried many different ways to make it work or keep my thread alive but without any success. I hope someone here is able to give me some hints.
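One pattern that may help here (offered only as a hedged sketch, not a verified Rapi2 fix): keep the subscription on the original STA thread and move only the heavy work to a background thread, capturing what you need from e in a closure before the handler returns:
private void DeviceConnected(object sender, RemoteDeviceConnectEventArgs e)
{
    if (e.Device == null) return;
    var device = e.Device; // capture what you need before leaving the handler
    var worker = new Thread(() =>
    {
        // heavy pull/push and number crunching here, using 'device'
        ProcessDevice(device); // ProcessDevice is a hypothetical helper
    });
    worker.IsBackground = true;
    worker.Name = "Data Thread";
    worker.Start();
}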

.NET stop waiting for a database event which does not arrive

I'm working on a really big project developed by two teams, one (mainly) for the database, and one (where I am) mainly for the GUI and helper classes as an interface between GUI and DB.
Obviously, there are errors in communication, and - of course - we can't assume 100Mbit bandwidth & super-fast server computer.
Language is C# .NET, target "framework" is WPF and Silverlight.
When a user clicks a button, the GUI asks the DB (through helper classes) for information. Let's say... pizza types. The server should answer "{Funghi,Frutti di mare,Prosciutto}". When DB sends his answer, we receive a "database.Ready" event and fill our datagrid.
BUT if the user clicks the button while we haven't received the answer yet, the GUI sends another request to the database. And the whole system tries to serve the user.
But it can't, because when the second request is sent, the first one has already been disposed by the time we want to read it. So NullReferenceExceptions occur.
I've solved this by implementing kind of a semaphore which closes when user input occurs and opens when the Ready event (the functions it calls) finishes working.
Problem:
If I don't receive the Ready event, no user input is allowed, but this is wrong.
Question:
Is there a common (or at least, working) solution to stop waiting for the Ready event and...
1) re-sending the request a few times, hoping we receive our pizza types?
AND/OR
2) dropping the request, telling the user that the database failed to send the answer, and re-opening the semaphore?
I can't post code here as this code is the property of a corporation, so I'd prefer theoretical solutions, which are fine for professionals too.
Sorry for the long post, and thank you for your answers!
I assume that you are already using a background thread to dispatch the query to the database and wait for its response. You can use the Task API that was introduced in .NET 4.0 to cancel such a request. For that, you pass in a CancellationToken that signals the status to the executing task. You can obtain a CancellationToken via a CancellationTokenSource as shown in the following code:
public partial class MainWindow : Window
{
private readonly CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();
public MainWindow()
{
InitializeComponent();
}
private void Button_CallDatabase(object sender, RoutedEventArgs e)
{
Task.Factory.StartNew(CallDatabase, _cancellationTokenSource.Token);
}
private void Button_OnNavigate(object sender, RoutedEventArgs e)
{
// If you navigate, you can cancel the background task and thus
// it will not execute any further
_cancellationTokenSource.Cancel();
}
private void CallDatabase()
{
// This simulates a DB call
for (var i = 0; i < 50; i++)
{
Thread.Sleep(100);
}
// Check if cancellation was requested
if (_cancellationTokenSource.Token.IsCancellationRequested)
{
Debug.WriteLine("Request cancelled");
return;
}
Debug.WriteLine("Update Controls with DB infos.");
}
}
Note that this example is simplified; you can and should put this in another component (e.g. a view model).
If you still want to use the Ready event, you could also just unregister from it when you navigate away, so that no further actions will be performed when it is raised.
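A minimal sketch of that unregister idea (OnDatabaseReady is an illustrative handler name):
// Stop reacting to late results once the user navigates away.
database.Ready -= OnDatabaseReady;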

FiddlerCore session timeout

I'm handling local requests by using FiddlerCore. All sessions are queued in a Queue<Session> and processed by a BackgroundWorker. After the processing is done, I would like to send a response, indicating the success or failure of the processing, by using the processed session. The problem is that I'm getting the "Too late, we're already talking to the server" error.
This is the FiddlerCore function:
private static void FiddlerApplication_BeforeRequest(Session session)
{
if (session.hostname.ToLower() == "localhost")
{
LogHelper.WriteFormat("Local request {0} enqueued", session.id);
sessionsQueue.Enqueue(session);
if (!sessionWorker.IsBusy)
sessionWorker.RunWorkerAsync();
}
}
This is the thread function:
private static void sessionWorker_DoWork(object sender, DoWorkEventArgs e)
{
while (sessionsQueue.Count > 0)
{
if (sessionWorker.CancellationPending)
{
e.Cancel = true;
sessionsQueue.Clear();
LogHelper.Write("Shutting down, all requests canceled");
break;
}
currentSession = sessionsQueue.Dequeue();
LogHelper.WriteFormat("Processing request ID {0}", currentSession.id);
ProcessSession();
}
}
This is the code at the end of the ProcessSession function:
{
...
currentSession.bBufferResponse = true;
currentSession.utilCreateResponseAndBypassServer();
currentSession.oResponse.headers.HTTPResponseStatus = "200 OK";
currentSession.oResponse["Content-Type"] = "text/html; charset=UTF-8";
currentSession.oResponse["Cache-Control"] = "private, max-age=0";
currentSession.utilSetResponseBody(responseBody);
}
I've tried to tamper with the session's timers and state, but without success.
The exception in question occurs when you call utilCreateResponseAndBypassServer after having already sent the request to the server. If you want to use the utilCreateResponseAndBypassServer method, you must do so inside the BeforeRequest handler, not at some future time. Similarly, setting bBufferResponse after having already connected to the server is pointless.
Based on your later remarks, you have a misunderstanding about how threading works with FiddlerCore. FiddlerCore processes Sessions on background threadpool threads.
When BeforeRequest fires, your code has a chance to run. If you call utilCreateResponseAndBypassServer inside that method, then the response you generate is immediately returned to the client.
If you don't call utilCreateResponseAndBypassServer inside BeforeRequest, the request is immediately sent to the server by FiddlerCore, and the response is returned to the client when it's available.
To achieve what you're describing, you should NOT try to do your own threading; instead, do all of your work on the thread that enters the BeforeRequest method and don't leave that method without generating the desired response. You don't need to worry about hanging the UI or anything like that, because BeforeRequest runs on a background thread. The only thing you must do is use Invoke if any of your code needs to interact with UI owned by the UI thread.
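Applied to the code in the question, that roughly means collapsing the queue/worker into the handler itself, something like this sketch (ProcessSession is assumed here to be reworked so that it takes the session and returns the response body):
private static void FiddlerApplication_BeforeRequest(Session session)
{
    if (session.hostname.ToLower() != "localhost") return;

    LogHelper.WriteFormat("Processing local request {0}", session.id);

    // Do all the work on this background thread and generate the
    // response before leaving the handler.
    string responseBody = ProcessSession(session); // hypothetical reworked signature

    session.utilCreateResponseAndBypassServer();
    session.oResponse.headers.HTTPResponseStatus = "200 OK";
    session.oResponse["Content-Type"] = "text/html; charset=UTF-8";
    session.oResponse["Cache-Control"] = "private, max-age=0";
    session.utilSetResponseBody(responseBody);
}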

Using LocalMessageSender synchronously

We have an application with a primary window that can launch multiple other windows in new browsers. We are using a Silverlight application as a coordinating server in the primary window to close all windows that are part of the app, regardless of the way they are opened (we can't guarantee it was via window.open, so we don't always have a handle to the window in JavaScript).
On log out, we want to signal all the other windows to perform an auto-save, if necessary, then close down.
So all windows have a Silverlight app, and they coordinate using LocalMessageSenders. However, these are asynchronous:
private void ProcessAutosave()
{
foreach (string s in _windows)
{
SendMessage(s, "notify-logout");
}
// code here quoted later...
}
// SendAsync doesn't send until the method terminates, so it has to be done in its own function.
private void SendMessage(string to, string message)
{
var lms = new LocalMessageSender(to);
lms.SendCompleted += new EventHandler<SendCompletedEventArgs>(SenderSendCompleted);
lms.SendAsync(message);
}
Since ProcessAutosave is called from a JavaScript onunload event, which can't be cancelled, we need this to be synchronous: it must not complete before we have processed a response from each sub-window, so that the session state will still be valid, etc.
In SenderSendCompleted we remove items from _windows when they have said they're done.
So I added a loop at the end:
while (_windows.Count > 0)
{
    Thread.Sleep(1);
}
However, that never terminates, unless I put an iteration counter on it.
Am I the victim of a compiler optimisation meaning the changes in SenderSendCompleted do not affect that while loop, or, have I fundamentally misunderstood something? Or missed something obvious that's staring me in the face?
It sounds like a subtle version of a race situation due to going sync/async. Couldn't the process in question also receive notifications from the windows that they have received the message and are shutting down? Once all of the counter-messages have been received, the main app could shut down without the busy wait at the end(?).
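A rough sketch of that idea (names are illustrative; the parent would listen on its own LocalMessageReceiver and each child would send a confirmation once its autosave has finished):
// Parent side: count down as children confirm they have finished saving.
var receiver = new LocalMessageReceiver("parent-window");
receiver.MessageReceived += (s, e) =>
{
    _windows.Remove(e.Message); // assume the child sends back its own name
    if (_windows.Count == 0)
    {
        // every child has confirmed; it is now safe to finish logging out
    }
};
receiver.Listen();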
I have found a way to work around it. However, this does not really "solve" the problem in general, just in my case, which also only supports Internet Explorer.
function WindowCloseEventHandler()
{
var app = // get silverlight app handle...
app.doAutoSave();
var params = 'whatever you need';
var args = new Object();
args.hwnd = window;
window.showModalDialog('blocker.aspx',args,params);
}
function checkAutoSave()
{
var app = // get silverlight app handle...
return app.autosavecomplete();
}
Then in blocker.aspx we display a static "performing logout handlers" type message and do:
function timerTick()
{
if(window.dialogArguments.hwnd.checkAutoSave()) {
window.close();
} else {
setTimeout(timerTick, 500);
}
}
And start the timer on window load.
The child windows' Silverlight apps are notified to start an autosave, and they notify the parent when they are done. We then poll the parent's status from a modal dialog, which blocks the termination of WindowCloseEventHandler(), which we have wired up to the onclose event of the body.
It's hacky and horrible, but it means Silverlight stays asynchronous and we're using a JavaScript timer, so the JavaScript isn't loading the system.
Of course, if the user closes the modal dialog, there is a potential for issues.

How to get feedback to a page from a running thread in ASP.NET?

So I have this interface that is just one big GO button that syncs a bunch of data from one tool to another. The problem is it takes a really long freaking time and some users are left wondering what's the deal. So I am wondering if there is a way that I can put something in my loop so that every so many entries it sends something back to the page to update them on the progress.
Currently it is just an .aspx page with an .aspx.cs behind it. The Go button fires off the whole process, and it calls Response.Write a ton of times (as well as writing the same thing to a log file I made), but the responses don't show until the entire thing is done.
Please advise.
You could design a class which will be stored in the session and which will represent the current state of the operation:
public class OperationState
{
public object Result { get; set; }
public int Progress { get; set; }
public string Error { get; set; }
}
An instance of this class could be created when you start the operation and stored in the user session. Then, at each step of the operation, you could retrieve it from the session and update the Progress property. Once the operation terminates, you could set the Result property, or the Error property in case an exception occurs. In the meantime, you could design a PageMethod which will be accessible from client script. This method will simply return the state instance from the session. You will then invoke it periodically and asynchronously from JavaScript to check the progress and update the DOM to notify the user.
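A sketch of such a page method (the session key and names are illustrative, and EnablePageMethods would need to be turned on for the ScriptManager):
// Callable from client script as PageMethods.GetOperationState(onSuccess)
[System.Web.Services.WebMethod(EnableSession = true)]
public static OperationState GetOperationState()
{
    return System.Web.HttpContext.Current.Session["OperationState"] as OperationState;
}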
I am assuming you are calling another class to do the work. Let's call this the WorkerClass.
You can have the WorkerClass expose an event that the .aspx page hooks up to and writes a message when the event is triggered.
// Overload EventArgs to send messages back up
public delegate void UpdateMethod(object sender, EventArgs e);
public class WorkerClass
{
public event UpdateMethod UpdateMethod;
}
WorkerClass worker = new WorkerClass();
worker.UpdateMethod += new UpdateMethod(worker_UpdateMethod);
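A slightly fuller sketch of how that class might raise the event and how the page might react (the method body and message text are illustrative):
public class WorkerClass
{
    public event UpdateMethod UpdateMethod;

    public void ProcessMassiveWorkLoad()
    {
        for (int i = 0; i < 100000; i++)
        {
            // ... do the actual sync work for item i ...

            // Tell any subscriber (the page) that progress was made.
            if (UpdateMethod != null)
                UpdateMethod(this, EventArgs.Empty);
        }
    }
}

// In the page:
protected void worker_UpdateMethod(object sender, EventArgs e)
{
    Response.Write("Progress...<br/>");
}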
EDIT: based on a comment, the work is done in the page itself.
If you don't want to refactor the work into another class (which I suggest), you can post the messages this way.
protected override void Render(HtmlTextWriter writer)
{
base.Render(writer);
this.ProcessMassiveWorkLoad();
}
private void ProcessMassiveWorkLoad()
{
for(int i = 0; i < 100000; i++)
{
// Do some work
// Write the fact you have work
Response.Write(string.Format("Done {0} of 100000", i));
}
}
The simplest way to resolve your issue is to call Response.Flush() after each Response.Write.
This will flush the current response buffer back to the client, enabling them to see the current state of the page.
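For example, inside the loop shown above (a small sketch):
Response.Write(string.Format("Done {0} of 100000", i));
Response.Flush(); // push this chunk of output to the client immediately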
Even David's method would need this to get the responses out to the user in a timely manner.
The better solution would be along the lines of Darin's, which would involve some client-side scripting of (say) an update panel that you refresh with a JavaScript timer to get the latest state, but that may introduce other issues for you (needing JavaScript turned on, rewriting the long-running method as something you can fire off asynchronously, etc.).
If it's any consolation, I've done both in the past, and would use either again.
