Add a real-time progress update to a slow page in ASP.NET - C#

I'm trying to add a real-time progress report to my C#/ASP.NET 4.0 application for a slow-loading page. I've looked at the UpdatePanel and UpdateProgress AJAX controls, but I don't think they're suitable.
Basically, when the user clicks a button the page executes a number of tasks, and I'd like the user to see an update as each one completes, rather than a single report after they have all finished and the page load completes.
The order in which things would happen would be:
1. User clicks the button to start.
2. Call method 1.
3. When method 1 completes, the user sees "Method 1 done".
4. Call method 2.
etc.
Can anyone help with this?

This sort of asynchronous execution can be difficult to implement. A few solutions off the top of my head:
Totally asynchronous without AJAX:
1. User hits the button, which submits the page.
2. Server generates a GUID for the task, and creates a record in your database. This might include:
   - GUID (ID)
   - Status flag/enum
   - Start time
3. Server spawns a thread to handle the task and passes in the GUID.
4. Server returns the GUID, along with a "Working..." message.
5. After n seconds/milliseconds/insert-time-span-here, the browser posts the page again, including a "GetStatus" command and the GUID.
6. Server checks the status flag in the database based on the GUID (see the sketch after this list).
7. Server returns a status message based on the DB record ("Step 2...", "Still working", or whatever is appropriate).
8. Loop to step (5) until the status returned from the server indicates that the process is complete.
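A minimal sketch of the status check in steps 6-7, assuming a WebForms code-behind, a hypothetical TaskStatus table keyed by the GUID, and System.Data.SqlClient (connectionString, TaskIdHiddenField, and StatusLabel are illustrative names, not from the original post):

// Hypothetical handler for the "GetStatus" postback.
protected void GetStatusButton_Click(object sender, EventArgs e)
{
    var taskId = Guid.Parse(TaskIdHiddenField.Value);

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT Status FROM TaskStatus WHERE Id = @Id", conn))
    {
        cmd.Parameters.AddWithValue("@Id", taskId);
        conn.Open();

        // Show whatever the worker thread last wrote, e.g. "Step 2..." or "Complete".
        StatusLabel.Text = (string)cmd.ExecuteScalar();
    }
}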
In the thread created in step (3):
1. Thread starts.
2. Read current status from the DB record.
3. Execute the next step based on that status.
4. Update the DB status to indicate that it's ready to do the next step, or set an error flag.
5. Sleep for a few milliseconds to keep from blocking the app (might be unnecessary - I'm not sure how threads interact under IIS).
6. Loop to (2) until everything's done.
7. Thread exits.
Here's an example of easily creating a thread with a lambda.
(new Thread(() =>
{
    DoLongRunningWork();
})
{
    Name = "Long Running Work Thread",
    Priority = ThreadPriority.BelowNormal
}).Start();
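And a minimal sketch of what the worker itself might do, assuming the lambda above captures the task's GUID and calls DoLongRunningWork(taskId) with it, using the same hypothetical TaskStatus table (DoStep1/DoStep2 and connectionString are illustrative):

// Runs on the background thread created above; taskId is the GUID from step (2).
void DoLongRunningWork(Guid taskId)
{
    try
    {
        UpdateStatus(taskId, "Step 1...");
        DoStep1();

        UpdateStatus(taskId, "Step 2...");
        DoStep2();

        UpdateStatus(taskId, "Complete");
    }
    catch (Exception ex)
    {
        UpdateStatus(taskId, "Error: " + ex.Message);
    }
}

// Writes the current status to the TaskStatus row for this GUID.
void UpdateStatus(Guid id, string status)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("UPDATE TaskStatus SET Status = @Status WHERE Id = @Id", conn))
    {
        cmd.Parameters.AddWithValue("@Status", status);
        cmd.Parameters.AddWithValue("@Id", id);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}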
Synchronous
Easier, but might cause some performance problems:
User submits "Start" form.
Server writes "Starting..." to the response stream and flushes the stream. This should get the text back to the client, I think, but I haven't tried it in years.
Server executes first step.
Server writes status to the response stream and flushes.
Loop to step (3) until complete.
Effectively, the page keeps the connection open until the task is complete, and flushing the output periodically keeps the client from timing out. This may have problems with timeouts, etc, and your server configuration (output buffering, etc) might be an issue.
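A minimal sketch of that synchronous approach in a WebForms button handler; Response.Write and Response.Flush are the real APIs, but whether the text actually reaches the browser incrementally depends on buffering, proxies, and server configuration (DoStep1/DoStep2 are illustrative):

protected void StartButton_Click(object sender, EventArgs e)
{
    Response.BufferOutput = false;      // try to push output as soon as it is written

    Response.Write("Starting...<br/>");
    Response.Flush();

    DoStep1();
    Response.Write("Method 1 done<br/>");
    Response.Flush();

    DoStep2();
    Response.Write("Method 2 done<br/>");
    Response.Flush();

    Response.Write("All done.");
}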
Background Task
Similar to the first asynchronous approach:
1. User clicks "start".
2. Server adds a row to the DB that identifies the task to be executed, retrieves the ID, and returns it to the client.
3. Create a scheduled task (script, Windows service, etc.) that polls the table, executes the desired tasks, and updates the status as it progresses.
4. Client re-posts the form with the DB ID periodically. Server checks the ID against the DB and returns a message about the status (may include info about previous steps such as exec time, ETA, etc.).
5. Client loops to (4) until the task is complete or errors out.
The difference between this and the first approach is that the thread lives in a separate process instead of IIS.
Each approach has its issues, of course, and there may be a simpler way to do this.

Related

Migrating polling to SignalR without polling on the server?

I have a piece of functionality in my web application that kicks off a long running report on my server. Currently, I have an ajax method on my client that is constantly running to check if the currently running report is complete. It does this by repeatedly calling a method that queries the database to determine if a given report's status has changed from Running to either Error or Complete. Once the report is finished, I perform an ajax load to get the data.
I'm looking at implementing SignalR to add some additional functionality to other pages, and I figured this would be a good test case to get things going. My main concern is how to alert the client when the report is complete. Using SignalR, I can simply say something like:
public class ReportHub : Hub
{
    public async Task ReportComplete(string userId, ReportRunStatus guid)
    {
        await Clients.User(userId).SendAsync("ReportComplete", guid);
    }
}
However, I want to try to avoid putting a long running loop on the server as I'm afraid this could degrade performance as operations scale up. Is there a better way to handle checking the report status and alerting clients than simply polling until completion? Or is there some easy way to constantly be looking at the table and alerting on completed reports?

Calling different APIs with different interval times, saved in my database table

I have a list of APIs from different clients saved in my database table, and each API has a different time interval at which it should be called. What should be my approach to calling these APIs? New data may be added to the list of APIs in the table. Should I go for dynamic timers?
I have an application (GUI) which clients use to add new records.
These records represent an API url and the time (Schedule) at which that API should be called.
Your challenge is to write code that is able to call all the client-specified APIs at the specified schedule/time.
To me, API calling and handling the responses (storing into the DB, etc.) should be one component, and scheduling when to call which API should be another component (something like a cron job). This way, when the time is right, the appropriate API call will be triggered. This also gives you the flexibility to do multiple tries/retries in a day, etc.
Update after your comment:
You have an application (GUI) which clients use to add new records.
These records represent an API url and the time (Schedule) at which that API should be called.
Your challenge is to write code that is able to call all the client-specified APIs at the specified schedule/time.
If I have got that problem right - my original suggestion stands.
Component 1 - Scheduler
Use Quartz.NET (or create your own using a Timer, etc.) and create a service (say WCF) or process which will read records from the database and identify all the schedules and the API URLs that need to be called. When the scheduled time arrives, Quartz.NET will trigger your handler method, where you will make a call to Component 2 and pass on the API URL.
Component 2 - API Engine
When it receives a call from Component 1, it will make the API call and fetch the response, then store/process it as required.
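A minimal sketch of Component 1 using Quartz.NET 3.x (ApiRecord, its Url/CronExpression properties, and the "ApiUrl" job-data key are assumptions for illustration; the Quartz calls themselves are the standard JobBuilder/TriggerBuilder API):

using System.Collections.Generic;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// Component 1: schedule one Quartz job per record read from the database.
public static async Task ScheduleAllAsync(IEnumerable<ApiRecord> records)
{
    IScheduler scheduler = await new StdSchedulerFactory().GetScheduler();
    await scheduler.Start();

    foreach (var record in records)
    {
        IJobDetail job = JobBuilder.Create<CallApiJob>()
            .UsingJobData("ApiUrl", record.Url)         // pass the URL to the job
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithCronSchedule(record.CronExpression)    // per-record schedule
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}

// Component 2 is invoked from here when a trigger fires.
public class CallApiJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        string url = context.MergedJobDataMap.GetString("ApiUrl");
        // Hand off to the API engine here, e.g. await apiEngine.CallAsync(url);
        await Task.CompletedTask;
    }
}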
There are various schedulers that can be used to do this automatically. For example, you could use Quartz.NET and its AdoJobStore. I haven't used that myself, but it sounds appropriate:
With the use of the included AdoJobStore, all Jobs and Triggers configured as "non-volatile" are stored in a relational database via ADO.NET.
Alternatively, your database may well have timers built into it. However, if this is primarily an academic exercise (as suggested by "your challenge") you may not be able to use these.
I would keep a table of scheduled tasks, with columns specifying:
When the task should next be run
What the task should do
How to work out the next iteration of that task afterwards
If the task has been started, when it was started
If the task completed, when it completed
You can then write code in an infinite loop to just scan that table, e.g. once per minute. It should look for all tasks with a "next time" earlier than now that haven't completed:
If the task hasn't been started, update the row to show that it has been started (now), and start executing the task
If the task was started recently, ignore it
If the task was started "a long time ago" (i.e. longer than it would take to run successfully), either mark it as "broken" somehow, or restart
When a task completes successfully, update the row to indicate that it's finished, and add another row for the next time it should be started.
You'll need to work out exactly what your error strategy is:
How long should the gap be between a task starting and you deciding it's failed?
Do you always want to restart the task, or should some failures be permanent?
Do you need to record how often a task failed, and give up after a certain number of tries?
What do you do if you explicitly notice that the task has failed while you're executing it? (Rather than just by the fact that it was started a long time ago.)
For extra reliability, you'd need to think of other aspects too:
Do you need multiple task runners?
How can you spot when a task runner has failed, and restart that?
How do you deal with multiple task runners trying to start the same task at the same time?
You may not need to actually implement everything here, but they're all worth considering.
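A minimal sketch of the scanning loop described above, assuming a hypothetical ScheduledTasks table and plain ADO.NET (the column names and the MarkStarted/MarkBroken/ExecuteTask helpers are illustrative):

// Runs forever (e.g. inside a Windows service), scanning once per minute.
while (true)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        // Find due tasks that haven't completed yet.
        var cmd = new SqlCommand(
            @"SELECT Id, Command, StartedAt FROM ScheduledTasks
              WHERE NextRunAt <= @Now AND CompletedAt IS NULL", conn);
        cmd.Parameters.AddWithValue("@Now", DateTime.UtcNow);

        var due = new List<(int Id, string Command, DateTime? StartedAt)>();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                due.Add((reader.GetInt32(0), reader.GetString(1),
                         reader.IsDBNull(2) ? (DateTime?)null : reader.GetDateTime(2)));
            }
        }

        foreach (var task in due)
        {
            if (task.StartedAt == null)
            {
                MarkStarted(conn, task.Id);    // set StartedAt = now
                ExecuteTask(task.Command);     // run it, mark it complete, add the next iteration
            }
            else if (DateTime.UtcNow - task.StartedAt > TimeSpan.FromHours(1))
            {
                MarkBroken(conn, task.Id);     // started "a long time ago" - flag or restart
            }
            // else: started recently - ignore it
        }
    }

    Thread.Sleep(TimeSpan.FromMinutes(1));
}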

C# WebAPI 2 async method so user doesn't have to wait

I have the following WebAPI 2 method:
public HttpResponseMessage ProcessData([FromBody]ProcessDataRequestModel model)
{
    var response = new JsonResponse();

    if (model != null)
    {
        // checks if there are old records to process
        var records = _utilityRepo.GetOldProcesses(model.ProcessUid);
        if (records.Count > 0)
        {
            // there is an active process
            // insert the new process
            _utilityRepo.InsertNewProcess(records[0].ProcessUid);
            response.message = "Process added to ProcessUid: " + records[0].ProcessUid.ToString();
        }
        else
        {
            // if this is a new process then do adjustments rules
            var settings = _utilityRepo.GetSettings(model.Uid);

            // create a new process
            var newUid = Guid.NewGuid();

            // if its a new adjustment
            if (records.AdjustmentUid == null)
            {
                records.AdjustmentUid = Guid.NewGuid();

                // create new Adjustment information
                _utilityRepo.CreateNewAdjustment(records.AdjustmentUid.Value);
            }

            // if adjustment created
            if (_utilityRepo.CreateNewProcess(newUid))
            {
                // insert the new body
                _utilityRepo.InsertNewBody(newUid, model.Body, true);
            }

            // start AWS lambda function timer
            _utilityRepo.AWSStartTimer();

            response.message = "Process created";
        }

        response.success = true;
        response.data = null;
    }

    return Request.CreateResponse(response);
}
The above method can sometimes take 3-4 seconds to process (some DB calls and other calculations), and I don't want the user to wait until all the executions are done.
I would like the user to hit the Web API method and almost immediately get a success response, while the server finishes all the executions.
Any clue on how to implement Async / Await to achieve this?
If you don't need to return a meaningful response it's a piece of cake. Wrap your method body in a lambda you pass to Task.Run (which returns a Task). No need to use await or async. You just don't await the Task and the endpoint will return immediately.
However if you need to return a response that depends on the outcome of the operation, you'll need some kind of reporting mechanism in place, SignalR for example.
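A minimal sketch of that first (fire-and-forget) option, reusing the shapes from the question (JsonResponse and Request.CreateResponse are taken from the posted code; DoProcessing is a hypothetical method containing the original body):

public HttpResponseMessage ProcessData([FromBody]ProcessDataRequestModel model)
{
    // Kick the work off on a thread-pool thread and return immediately.
    // Note: if the app pool recycles before it finishes, the work is lost.
    Task.Run(() => DoProcessing(model));

    var response = new JsonResponse
    {
        success = true,
        message = "Process started"
    };
    return Request.CreateResponse(response);
}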
Edit: Based on the comments to the original post, my recommendation would be to wrap the code in await Task.Run(()=>...), i.e., indeed await it before returning. That will allow the long-ish process to run on a different thread asynchronously, but the response will still await the outcome rather than leaving the user in the dark about whether it finished (since you have no control over the UI). You'd have to test it though to see if there's really any performance benefit from doing this. I'm skeptical it'll make much difference.
2020-02-14 Edit:
Hooray, my answer's votes are no longer in the negative! I figured having had the benefit of two more years of experience I would share some new observations on this topic.
There's no question that asynchronous background operations running in a web server is a complex topic. But as with most things, there's a naive way of doing it, a "good enough for 99% of cases" way of doing it, and a "people will die (or worse, get sued) if we do it wrong" way of doing it. Things need to be put in perspective.
My original answer may have been a little naive, but to be fair the OP was talking about an API that was only taking a few seconds to finish, and all he wanted to do was save the user from having to wait for it to return. I also noted that the user would not get any report of progress or completion if it is done this way. If it were me, I'd say the user should suck it up for that short of a time. Alternatively, there's nothing that says the client has to wait for the API response before returning control to the user.
But regardless, if you really want to get that 200 right away JUST to acknowledge that the task was initiated successfully, then I still maintain that a simple Task.Run(()=>...) without the await is probably fine in this case. Unless there are truly severe consequences to the user not knowing the API failed, on the off chance that the app pool was recycled or the server restarted during those exact 4 seconds between the API return and its true completion, the user will just be ignorant of the failure and will presumably find out next time they go into the application. Just make sure that your DB operations are transactional so you don't end up in a partial success situation.
Then there's the "good enough for 99% of cases" way, which is what I do in my application. I have a "Job" system which is asynchronous, but not reentrant. When a job is initiated, we do a Task.Run and begin to execute it. The code in the task always holds onto a Job data structure whose ID is returned immediately by the API. The code in the task periodically updates the Job data with status, which is also saved to a database, and checks to see if the Job was cancelled by the user, in which case it wraps up immediately and the DB transaction is rolled back. The user cancels by calling another API which updates said Job object in the database to indicate it should be cancelled. A separate infinite loop periodically polls the job database server side and updates the in-memory Job objects used by the actual running code with any cancellation requests. Fundamentally it's just like any CancellationToken in .NET but it just works via a database and API calls. The front end can periodically poll the server for job status using the ID, or better yet, if they have WebSockets the server pushes job updates using SignalR.
So, what happens if the app domain is lost during the job? Well, first off, every job runs in a single DB transaction, so if it doesn't complete the DB rolls back. Second, when the ASP.NET app restarts, one of the first things it does is check for any jobs that are still marked as running in the DB. These are the zombies that died upon app pool restart but the DB still thinks they're alive. So we mark them as KIA, and send the user an email indicating their job failed and needs to be rerun. Sometimes it causes inconvenience and a puzzled user from time to time, but it works fine 99% of the time. Theoretically, we could even automatically restart the job on server startup if we wanted to, but we feel it's better to make that a manual process for a number of case-specific reasons.
Finally, there's the "people will die (or worse, get sued) if we get it wrong" way. This is what some of the other comments are more directed to. This is where you have to break down all jobs into small atomic transactions that are tracked in a database at every step, and which can be picked up by any server (the same or maybe another server in a farm) at any time. If it's really top notch, multiple servers can even work on the same job concurrently, depending on what it is. It requires carefully coding every background operation with this in mind, constantly updating a database with your progress, dealing with concurrent changes to the database (because now the entire operation is no longer a single atomic transaction), etc. Needless to say, it's a LOT of effort. Yeah, it would be great if it worked this way. It would be great if every app did everything to this level of perfection. I also want a toilet made out of solid gold, but it's just not in the cards now is it?
So my $0.02 is, again, let's have some perspective. Do the cost benefit analysis and unless you're doing something where lives or lots of money is at stake, aim for what works perfectly well 99%+ of the time and only causes minor inconvenience when it doesn't work perfectly.

SSAS Tabular AMO - how to know when a requested refresh is complete

The Partition class in the Tabular AMO library has a method for refreshing the partition (RequestRefresh). I can use the AMO library to fire this off; however, this method appears to be asynchronous and I cannot find a way of monitoring the request to know when the processing has completed (either refreshed or failed).
The partition class does have a "State" property, but in practice, this always appears to report as ready, even during processing or after a failure in refreshing the data that's caused no data to be written into the partition.
I need to be able to programmatically refresh my cube partitions, but I have tasks that I need to schedule after the build has completed. I could watch the refresh time, but that feels like the wrong way to do this, and any failed attempts do not appear to change this value (therefore requiring some form of timeout or other method for detecting failed refreshes).
Please add the following line after RequestRefresh. SaveChanges is synchronous and the refresh operation isn't actually executed until SaveChanges is run:
partition.RequestRefresh(RefreshType.Full);
db.Model.SaveChanges();
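For context, a fuller sketch assuming the Microsoft.AnalysisServices.Tabular (TOM) library; the server, database, table, and partition names are placeholders:

using Microsoft.AnalysisServices.Tabular;

using (var server = new Server())
{
    server.Connect(@"Data Source=localhost\TABULAR");

    Database db = server.Databases.GetByName("MyTabularDb");
    Partition partition = db.Model.Tables["Sales"].Partitions["Sales 2020"];

    partition.RequestRefresh(RefreshType.Full);

    // SaveChanges actually sends the refresh to the server and blocks until it
    // finishes, so a failed refresh surfaces here rather than at RequestRefresh.
    db.Model.SaveChanges();
}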

Execute a stored procedure from a windows form asynchronously and then disconnect?

I am calling a stored procedure from my application that can take 30 minutes to execute.
I don't want to make my user leave the application open for that entire time period. So I would like to call the sproc, let it fly, and let them shut down the application and come back later.
How can I do this?
This is actually a quite common scenario. You cannot do anything client-based, because the client may go away and disconnect, and you'll lose the work achieved so far. The solution is to use Service Broker activation: you create a service in the database and attach an activated procedure. In your application (or ASP page) you send a message to the service and embed the necessary parameters for your procedure. After your application commits, the message activates the service procedure. The service procedure reads the parameters from the message and invokes your procedure. Since activation happens on a server thread unrelated to your original connection, this is reliable. In fact, the server can even shut down and restart while your procedure is being executed, and the work will be rolled back and then resumed, since the activating message will trigger the service procedure again after the restart.
Update
I have published the details of how to do this including sample code on my blog: Asynchronous procedure execution.
You can use the BeginExecuteXXX/EndExecuteXXX methods (depending on whether it returns a result or not) of the SqlCommand, passing a callback delegate.
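For example, a minimal sketch using BeginExecuteNonQuery/EndExecuteNonQuery (the connection string and procedure name are placeholders; on older .NET versions the connection string needs Asynchronous Processing=true):

using System.Data;
using System.Data.SqlClient;

var conn = new SqlConnection("...;Asynchronous Processing=true");
var cmd = new SqlCommand("dbo.LongRunningProc", conn)
{
    CommandType = CommandType.StoredProcedure,
    CommandTimeout = 0                    // don't time the command out client-side
};
conn.Open();

cmd.BeginExecuteNonQuery(ar =>
{
    try
    {
        cmd.EndExecuteNonQuery(ar);       // completes (or throws) when the proc finishes
    }
    finally
    {
        conn.Dispose();
    }
}, null);

Note, though, that this still requires the client process to stay alive until the call completes, which is exactly what the question is trying to avoid.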
I suggest a re-architecture. Create a "work queue" table where you log requests to run the stored procedure. Then either have a Windows Service or a SQL Server job check that work queue from time to time (or be really ingenious and use a trigger) to kick off the stored procedure. Have the stored procedure update its progress from time to time in the work queue table, and your front-end can look at that and tell the user the progress, then display the results when they're done.
If you really want to close down your application completely, I suggest you define a job in SQL Server Agent, and just execute a T-SQL statement to start that job manually. The syntax is:
sp_start_job
    { [@job_name =] 'job_name'
    | [@job_id =] job_id }
    [ , [@error_flag =] error_flag ]
    [ , [@server_name =] 'server_name' ]
    [ , [@step_name =] 'step_name' ]
    [ , [@output_flag =] output_flag ]
The job would execute your stored procedure. You will have to be a little creative to pass in any arguments. For example, insert the parameters into a "queue" table and have the job process all the rows in the queue.
Instead of a job, an insert trigger on your queue should work as well.
I prefer to use a background service for offline processing, where your user app tells the service what to do and then disconnects. The service can log elapsed times and errors/status, and restart if necessary. WCF is designed for this and supports queues to communicate with.
let them shut down the app and come back later
If you're going to allow them to completely close the app, you'll have to start up a separate .exe or something in a different thread pool that executes your code calling the stored procedure. Otherwise your thread will die when you close the app.
Another approach would be to allow your application to run in the background (possibly in the notification area) and then exit or notify the user when the job completes. You could do this by using the BeginExecuteNonQuery and EndExecuteNonQuery methods to allow it to run on a separate thread.
Your application's main window does not need to be open. If you launched it as a secondary thread, it will continue to run so long as IsBackground == false. I usually prefer to do this stuff through SQL Server Agent or as a client-server application (nothing prevents a client-server app from both running on the same machine, or even being the same binary).
It's been a while...
using System.Threading;
.....
Thread _t = null;

void StartProcedure()
{
    _t = new Thread(this.StartProc);
    _t.IsBackground = false; // false is the default, so this thread keeps the process alive.
    _t.Start();
}

bool ProcedureIsRunning
{
    get { return _t != null && _t.IsAlive; } // IsAlive reports whether the thread is still running.
}

void StartProc()
{
    // Your logic here (the call to the stored procedure).
    // Could also do this as an anonymous method; broken out to keep it simple.
}
