WP7 vs PriorityThreadPool & Network Action - c#

I'm building a small WP7 app that needs to access/update several resources over the web. I'm looking to build a PriorityThreadPool object with some cancellation support to help me run Actions on several background threads. The idea is that this custom pool downloads in priority order whatever the user is currently seeing, then downloads the rest; if the user changes the view, the priorities change and those items move up the priority list of the pool.
Let's say I'm implementing an action responsible for downloading an image from a web server: would you make the async call synchronous, or would you leave it as is? Please take into consideration that I may run 100 actions that download 100 different images. If I don't make the calls synchronous, it may be pretty difficult to cancel an action, since they will all run very quickly through the thread pool. I guess that under the hood there is some sort of thread pool for network connectivity on WP7.
Any comments or suggestions?

Rather than try to (re?)create a "PriorityThreadPool", I'd create an object which manages multiple queues whose priority you can adjust as necessary.
This could then process each queue depending upon priority.
When processing the queue, only issue a few requests at once and start the next when one finishes.
You could do the processing on the ThreadPool or by creating a BackgroundWorker if you want greater control over being able to cancel requests.
Within each request you may want to process it as a synchronous operation: that makes the logic simpler, but it will make cancelling things harder.
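A minimal sketch of that queue-manager idea, assuming a fixed number of priority levels (DownloadQueueManager and its members are illustrative names, not an existing API); it issues at most two requests at once and always serves the highest-priority non-empty queue first:

using System;
using System.Collections.Generic;
using System.Threading;

public class DownloadQueueManager
{
    private readonly object _sync = new object();
    private readonly Queue<Action>[] _queues; // index 0 = highest priority
    private int _running;
    private const int MaxConcurrent = 2;      // only issue a few requests at once

    public DownloadQueueManager(int priorityLevels)
    {
        _queues = new Queue<Action>[priorityLevels];
        for (int i = 0; i < priorityLevels; i++)
            _queues[i] = new Queue<Action>();
    }

    // Re-prioritising an item when the user scrolls amounts to removing it
    // from one queue and enqueueing it on another (omitted for brevity).
    public void Enqueue(int priority, Action work)
    {
        lock (_sync) { _queues[priority].Enqueue(work); }
        TryStartNext();
    }

    private void TryStartNext()
    {
        Action next = null;
        lock (_sync)
        {
            if (_running >= MaxConcurrent) return;
            for (int i = 0; i < _queues.Length && next == null; i++)
                if (_queues[i].Count > 0) next = _queues[i].Dequeue();
            if (next == null) return;
            _running++;
        }
        ThreadPool.QueueUserWorkItem(delegate
        {
            try { next(); }
            finally
            {
                lock (_sync) { _running--; }
                TryStartNext(); // start the next item when one finishes
            }
        });
    }
}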

Related

Custom Command Windows Services on HIGH Priority

I have a Work Tracker WPF application deployed on Windows Server 2008, and this Tracker application communicates with a (Tracker) Windows service via a WCF service.
Users can create, edit, add, delete or cancel any work entry from the Work Tracker GUI application. Internally it sends a request to the Windows service. The Windows service receives the work request and processes it using multiple threads. Each work-request entry actually creates n work files (based on work priority) in an output folder location.
So each work request takes some time to complete the work-addition process.
Now my question is: if I cancel the work entry that is currently being created, I want to stop the current Windows service work at RUNTIME. The current thread which is creating output files for the work should get STOPPED, all the threads should be killed, and all the thread resources should be released once the user requests a CANCEL.
My workaround:
I use the Windows service's OnCustomCommand method to send custom values to the Windows service at runtime. What I am seeing is that it finishes processing the current work item (i.e. creating the output files for the work item received) and only then handles the custom command for cancelling the request.
Is there any way to stop the work-item request as soon as we get the custom command?
Any workaround is much appreciated.
Summary
You are essentially talking about running a task host for long-running tasks, and being able to cancel those tasks. Your specific question seems to want to know the best way to implement this in .NET. Your architecture is good, although you are brave to roll your own rather than use an existing framework, and you haven't mentioned whether you'll need to scale it later.
My preference is for using the TPL Task object. It supports cancellation, and is easy to poll for progress, etc. You can only use this in .NET 4 onwards.
It is hard to provide code without basically designing a whole job hosting engine for you and knowing your .NET version. I have described the steps in detail below, with references to example code.
Your approach of using the Windows Service OnCustomCommand is fine, you could also use a messaging service (see below) if you have that option for client-service comms. This would be more appropriate for a scenario where you have many clients talking to a central job service, and the job service is not on the same machine as the client.
Running and cancelling tasks on threads
Before we look at your exact context, it would be good to review MSDN - Asynchronous Programming Patterns. There are three main .NET patterns to run and cancel jobs on threads, and I list them in order of preference for use:
TAP: Task-based Asynchronous Pattern
Based on Task, which has been available only since .NET 4
The preferred way to run and control any thread-based activity from .NET 4 onwards (see the sketch after this list)
Much simpler to implement than EAP
EAP: Event-based Asynchronous Pattern
Your only option if you don't have .NET 4 or later.
Hard to implement, but once you have understood it you can roll it out and it is very reliable to use
APM: Asynchronous Programming Model
No longer relevant unless you maintain legacy code or use old APIs.
Even with .NET 1.1 you can implement a version of EAP, so I will not cover this, as you say you are implementing your own solution
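For comparison, here is a minimal TAP sketch (using Task.Factory.StartNew, since Task.Run only arrived in .NET 4.5); it starts a job with a CancellationToken and cancels it cooperatively:

using System;
using System.Threading;
using System.Threading.Tasks;

class TapCancellationDemo
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        Task job = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 100; i++)
            {
                cts.Token.ThrowIfCancellationRequested(); // cooperative check
                Thread.Sleep(100);                        // stands in for real work
            }
        }, cts.Token);

        Thread.Sleep(350);
        cts.Cancel(); // request cancellation

        try { job.Wait(); }
        catch (AggregateException ex)
        {
            // Expected: the job observed the token and threw OperationCanceledException.
            ex.Handle(e => e is OperationCanceledException);
        }
        Console.WriteLine(job.Status); // Canceled
    }
}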
The architecture
Imagine this like a REST based service.
The client submits a job, and gets returned an identifier for the job
A job engine then picks up the job when it is ready, and starts running it
If the client doesn't want the job any more, then they delete the job, using its identifier
This way the client is completely isolated from the workings of the job engine, and the job engine can be improved over time.
The job engine
The approach is as follows:
For a submitted task, generate a universal identifier (UID) so that you can:
Identify a running task
Poll for results
Cancel the task if required
Return that UID to the client
Queue the job using that identifier
When you have resources, run the job by creating a Task
Store the Task in a dictionary, keyed by the UID
When the client wants results, they send the request with the UID and you return progress by checking against the Task that you retrieve from the dictionary. If the task is complete they can then send a request for the completed data, or in your case just go and read the completed files.
When they want to cancel they send the request with the UID, and you cancel the Task by finding it in the dictionary and telling it to cancel.
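To make that bookkeeping concrete, here is a sketch under the above design (JobEngine, Submit, Poll and Cancel are illustrative names, not an existing API), using .NET 4's Task and ConcurrentDictionary:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class JobEngine
{
    private class JobEntry
    {
        public Task Task;
        public CancellationTokenSource Cancellation;
    }

    private readonly ConcurrentDictionary<Guid, JobEntry> _jobs =
        new ConcurrentDictionary<Guid, JobEntry>();

    // Submit a job; return the UID the client uses for polling/cancelling.
    public Guid Submit(Action<CancellationToken> work)
    {
        var uid = Guid.NewGuid();
        var cts = new CancellationTokenSource();
        _jobs[uid] = new JobEntry
        {
            Cancellation = cts,
            Task = Task.Factory.StartNew(() => work(cts.Token), cts.Token)
        };
        return uid;
    }

    // Report progress by checking the stored Task's status.
    public TaskStatus? Poll(Guid uid)
    {
        JobEntry entry;
        return _jobs.TryGetValue(uid, out entry) ? entry.Task.Status : (TaskStatus?)null;
    }

    // Cancel by finding the Task in the dictionary and signalling its token.
    public void Cancel(Guid uid)
    {
        JobEntry entry;
        if (_jobs.TryGetValue(uid, out entry))
            entry.Cancellation.Cancel(); // the job must observe the token to stop
    }
}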
Cancelling inside a job
Inside your code you will need to regularly check your cancellation token to see if you should stop running code (see How do I abort/cancel TPL Tasks? if you are using the TAP pattern, or Albahari if you are using EAP). At that point you will exit your job processing, and your code, if designed well, should dispose of IDisposables where required, remove big strings from memory, etc.
The basic premise of cancellation is that you check your cancellation token:
After a block of work that takes a long time (e.g. a call to an external API)
Inside a loop (for, foreach, do or while) that you control, you check on each iteration
Within a long block of sequential code, that might take "some time", you insert points to check on a regular basis
You need to define how quickly you need to react to a cancellation; for a Windows service it should preferably be within milliseconds, to make sure that Windows doesn't have problems restarting or stopping the service.
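As an illustration, in the work-file scenario from the question the checks might sit like this (the types and helpers are placeholders, not the asker's real code):

using System;
using System.Threading;

static class Worker
{
    public static void ProcessWorkItem(string[] outputFiles, CancellationToken token)
    {
        foreach (var file in outputFiles)
        {
            token.ThrowIfCancellationRequested(); // check on each iteration
            CreateOutputFile(file);               // a block of work that takes a while
        }
        token.ThrowIfCancellationRequested();     // check again after the long block
        Console.WriteLine("Work item complete.");
    }

    static void CreateOutputFile(string path)
    {
        Thread.Sleep(200); // stands in for the real file-creation work
    }
}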
Some people do this whole process with raw threads, cancelling by terminating the thread; this is ugly and not recommended any more.
Reliability
You need to ask: what happens if your server restarts, the windows service crashes, or any other exception happens causing you to lose incomplete jobs? In this case you may want a queue architecture that is reliable in order to be able to restart jobs, or rebuild the queue of jobs you haven't started yet.
If you don't want to scale, this is simple - use a local database that the windows service stores job information in.
On submission of a job, record its details in the database
When you start a job, record that against the job record in the database
When the client collects the job, mark it for delayed garbage collection in the database, and then delete it after a set amount of time (1 hour, 1 day ...)
If your service restarts and there are "in progress jobs" then requeue them and then start your job engine again.
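A sketch of that restart path, with an in-memory stand-in for the local database (the JobStatus states and the requeue logic are assumptions about your schema, not prescribed):

using System;
using System.Collections.Generic;

enum JobStatus { Queued, InProgress, Complete }

class JobRecord { public Guid Id; public JobStatus Status; }

class JobService
{
    private readonly List<JobRecord> _db = new List<JobRecord>();    // stands in for the local database
    private readonly Queue<JobRecord> _queue = new Queue<JobRecord>();

    // Called when the windows service (re)starts.
    public void OnStart()
    {
        // Jobs still marked InProgress never finished before the crash/restart;
        // requeue them before the job engine starts taking new work.
        foreach (var job in _db)
        {
            if (job.Status == JobStatus.InProgress)
            {
                job.Status = JobStatus.Queued;
                _queue.Enqueue(job);
            }
        }
        // ...then start the job engine as normal.
    }
}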
If you do want to scale, or your clients are on many computers, and you have a job engine "farm" of 1 or more servers, then look at using a message queue instead of directly communicating using OnCustomCommand.
Message queues have multiple benefits. They allow you to reliably submit jobs to a central queue that many workers can pick up and process, and they decouple your clients and servers so you can scale out your job-running services. This works locally or globally, and always reliably; you can even combine it with running your Windows service on cloud workers which you can scale dynamically.
Examples of technologies are MSMQ (if you want to maintain your own, or must stay inside your own firewall) or Windows Azure Service Bus (WASB), which is cheap and already done for you. In either case you will want to use Patterns and Best Practices for Enterprise Integration. In the case of WASB there are many developer resources (on MSDN, in the MSDN samples for BrokeredMessaging etc., and for the new Task-based API), plus NuGet packages for you to use.

How do I cancel and roll back part of a workflow

I have a very long-running workflow that moves video files around between video processing devices and then reports the files' state to a database which is used to drive a UI.
At times the users press a button on the UI to "Accept" a file into a video storage server. This involves copying a file from one server to another.
They have asked if this activity can be cancelled.
I've looked at the WF4 documentation and I can't see a way to roll back part of a workflow.
Is this possible, and what technique should I use?
There are two basic built-in activities for reverting work:
The TransactionScope activity for ACID transactions
The CompensableActivity for long-running work
With the CompensableActivity you add activities to the compensation handler to undo work previously done. The Compensate activity can be used to trigger compensation. If there is no compensation, the confirmation handler will run, either automatically at the end of the workflow or when you use the Confirm activity.
See A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 by Matt Milner for more details.
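A minimal sketch of the CompensableActivity shape in WF4, with WriteLine standing in for the real copy and clean-up activities:

using System.Activities;
using System.Activities.Statements;

class CompensationDemo
{
    static void Main()
    {
        // In a real workflow, Body would copy the file to the storage server and
        // CompensationHandler would delete the partial copy if compensation fires.
        Activity workflow = new CompensableActivity
        {
            Body = new WriteLine { Text = "Copying file to the storage server..." },
            CompensationHandler = new WriteLine { Text = "Undoing: deleting the partial copy..." }
        };
        WorkflowInvoker.Invoke(workflow);
    }
}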
Okay, so let's first say that the process of "rolling back" what was already uploaded will have to be done by hand, so wherever you're storing those chunks you'll need to clean them up by hand when they cancel.
Now, on to the workflow itself. In my opinion you could set up your FlowChart as described below.
Alright, so let's break down this workflow. The entire service should be correlated on some client key, so that you can start the service with Start once per client to keep startup costs down.
Next, when the client wants to start a transfer, you'll call BeginTransfer, which moves into the transfer loop. The transfer loop is set up so that you can cancel between chunks if necessary by calling CancelTransfer.
That same branch, in this model, is used to finish the transfer as well, because it gets out of the loop; so when you're done transferring chunks, just call CancelTransfer (if you don't like that, just set up a different branch that looks exactly the same).
Finally, when you're in the process loop, you can SoftExit the entire workflow and shut it down, so that you can kill it softly when maintenance is necessary; and when the client is finished with its connection it needs to call SoftExit to dispose of it.
Not sure if I totally understand your scenario, but I think you would need to run your transfer process on an asynchronous thread that, from time to time, checks a "cancel" variable and performs a rollback if it is set. This variable can be modified on the main thread by your UI.
Of course, this will allow you to cancel between transfers, not in the middle of one single transfer.
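A sketch of that approach, with placeholder transfer methods; the volatile keyword keeps the worker thread from caching a stale read of the flag:

using System;
using System.Threading;

class ChunkedTransfer
{
    private volatile bool _cancelRequested;

    // Called on the UI thread (e.g. from a Cancel button handler).
    public void RequestCancel() { _cancelRequested = true; }

    // Runs on a background thread.
    public void Run(int totalChunks)
    {
        for (int chunk = 0; chunk < totalChunks; chunk++)
        {
            if (_cancelRequested)
            {
                RollBack(chunk); // undo the chunks already sent
                return;
            }
            SendChunk(chunk);
        }
    }

    private void SendChunk(int chunk) { Thread.Sleep(100); }     // placeholder
    private void RollBack(int chunksSent) { /* placeholder cleanup */ }
}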

Multi-threading

The application I'm currently working on performs some I/O or CPU intensive actions (file compression, file transfers, communicating with third party APIs, etc.) that occur when a user presses a 'Send' button.
I'm currently trying to persuade my employers that we should push these actions out to separate threads inside the main application (we'd need a maximum of two worker threads active at any given time), but my colleague has claimed that:
Any extra processing executed on a low priority thread could affect the usability of the GUI.
My view was that pushing I/O- or CPU-intensive activity to worker threads, updating the UI with Invoke calls during progress reporting, is pretty standard practice for handling intensive activity.
Am I incorrect? If so, could someone provide an explanation?
EDIT:
Thank you for the answers so far.
I should clarify: the colleague's solution to non-blocking is to spawn a child process containing a timer loop that scans a folder and processes the file compression/transfer activities. (Note that this doesn't cover the calls to third-party APIs; I have no idea what his solution there would be.)
The main issue with this approach is that the main application loses all visibility into the state of whatever activity is in progress, leading to, IMHO, further complexity (his solution to progress reporting is to expose the Windows message pump in both processes and send custom messages between the two).
You are correct. Background threads are the very essence of keeping the UI active, precisely as you have described, via Invoke operations. Keeping everything on the GUI thread will eventually clog up the plumbing and make the GUI unresponsive.
The definitive answer would be to implement it as a proof-of-concept and then profile the app to see what sort of performance hit may or may not exist.
Having said that, it sounds like rubbish to me. In fact, it's quite the opposite: using additional threads is often the best way to keep the UI responsive.
Especially with things like the Task Parallel Library, it's just not very difficult to do basic multi-threading.
Yes, you are correct. The general principle is that the thread that's responsible for responding to the user and keeping the user interface up to date (usually referred to as the UI thread) should never be used to perform any lengthy operation.
As a rule of thumb, anything that could take longer than about 30ms is a candidate for removal from the UI thread. This is a little aggressive: 30ms is about the shortest interval that most people will perceive as anything other than instantaneous, and it's actually slightly less than the interval between successive frames shown on a movie screen.
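As an illustration of the standard pattern these answers describe, here is a minimal WinForms sketch (the form and the simulated work are invented for the example): the heavy work runs on a TPL task, and progress is marshalled back to the UI thread with Invoke:

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

public class SendForm : Form
{
    private readonly ProgressBar _progress = new ProgressBar { Dock = DockStyle.Top };
    private readonly Button _send = new Button { Text = "Send", Dock = DockStyle.Bottom };

    public SendForm()
    {
        Controls.Add(_progress);
        Controls.Add(_send);
        _send.Click += (s, e) => Task.Factory.StartNew(DoWork);
    }

    private void DoWork()
    {
        for (int percent = 0; percent <= 100; percent += 10)
        {
            Thread.Sleep(200); // stands in for compression/transfer work
            int p = percent;
            _progress.Invoke((Action)(() => _progress.Value = p)); // marshal to UI thread
        }
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new SendForm());
    }
}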

Thread Aborted?

Hi,
I have an ASP.NET application where I have added a web service that contains a "fire and forget" method. When this method is executed it starts a loop (0-99999), and on every iteration it reads an XML file and saves it to the database.
The problem is that this action takes a couple of hours, and it usually ends with a ThreadAbortException.
I have seen that you can increase the executionTimeout, and this is how:
<httpRuntime executionTimeout="604800"/>
<compilation debug="true">
But this does not help.
I have also tried adding a Thread.Sleep within the loop. If I set it to 500 the loop gets about halfway, and if I set it below 100 it only gets through a couple of thousand iterations before the ThreadAbortException.
How can I solve this?
Don't run the loop inside the web service. Instead, have it in a console app, a winforms app, or possibly even a windows service. Use the web service to start up the other program.
A web service is basically a web page, and ASP.NET web pages are not meant to host long-running processes.
This article does not directly answer your question, but contains a snippet of info relevant to my answer:
http://msdn.microsoft.com/en-us/magazine/dd296718.aspx
However, when the duration of the operation grows longer than the typical ASP.NET session duration (20 minutes) or requires multiple actors (as in my hiring example), ASP.NET does not offer sufficient support. You may recall that the ASP.NET worker processes automatically shut down on idle and periodically recycle themselves. This will cause big problems for long-running operations, as state held within those processes will be lost.
and the article is a good read, at any rate. It may offer ideas for you.
Not sure if this is 'the answer', but when you receive the web service call you could consider dispatching the action onto another thread. That could then run until completion. You would want to consider how you ensure that someone doesn't kick off two of these processes simultaneously though.
I have a ASP.NET application where I have added a Webservice that contains a "fire and forget" method. When this method is executed it will start a loop (0-99999) and for every loop it will read a xml file and save it to the database.
Let's not get into the fact that I find this approach quite... hm... bad for many reasons (like a reset in the middle of the run). I would queue the request, then return, and have a queue listener do the processing with transactional integrity.
Anyhow, what you CAN do is:
Queue a work item for a thread-pool thread to pick up.
Return immediately.
Besides that, web services and the like are not a good place for hours-long processes. Kick off a workflow and handle it separately.
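For the "queue a work item and return immediately" shape, a minimal ASMX sketch (ImportService and the loop body are illustrative); note that the caveats above still apply, since a worker-process recycle will still kill the queued work:

using System.Threading;
using System.Web.Services;

public class ImportService : WebService
{
    [WebMethod]
    public void StartImport()
    {
        // Hand the long-running loop to a pool thread and return immediately.
        ThreadPool.QueueUserWorkItem(delegate { ProcessAllFiles(); });
    }

    private static void ProcessAllFiles()
    {
        for (int i = 0; i < 100000; i++)
        {
            // read XML file i and save it to the database (placeholder)
        }
    }
}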

ASP.NET Threading: should I use the pool for DB and Emails actions?

I’m looking for the best way of using threads considering scalability and performance.
In my site I have two scenarios that need threading:
UI trigger: for example, the user clicks a button and the server should read data from the DB and send some emails. Those actions take time and I don't want the user's request to be delayed. This scenario happens very frequently.
Background service: when the app starts, it triggers a thread that runs every 10 minutes, reads from the DB, and sends emails.
The solutions I found:
A. Use thread pool - BeginInvoke:
This is what I use today for both scenarios.
It works fine, but it uses the same threads that serve the pages, so I think I may run into scalability issues. Can this become a problem?
B. No use of the pool – ThreadStart:
I know starting a new thread takes more resources than using a thread pool.
Can this approach work better for my scenarios?
What is the best way to reuse the threads I start?
C. Custom thread pool:
Because my scenarios occur frequently, maybe the best way is to start a new thread pool?
Thanks.
I would personally put this into a different service. Make your UI action write to the database, and have a separate service which either polls the database or reacts to a trigger, and sends the emails at that point.
By separating it into a different service, you don't need to worry about AppDomain recycling etc., and you can put it on an entirely different server if and when you want to. I think it'll give you a more flexible solution.
I do this kind of thing by calling a webservice, which then calls a method using a delegate asynchronously. The original webservice call returns a Guid to allow tracking of the processing.
For the first scenario, use ASP.NET asynchronous pages. Async pages are a very good choice when it comes to scalability, because during async execution the HTTP request thread is released and can be re-used.
I agree with Jon Skeet that for the second scenario you should use a separate service; a Windows service is a good choice here.
Out of your three solutions, don't use BeginInvoke. As you said, it will have a negative impact on scalability.
Between the other two, if the tasks are truly background and the user isn't waiting for a response, then a single, permanent thread should do the job. A thread pool makes more sense when you have multiple tasks that should be executing in parallel.
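If the single permanent thread fits your case, here is a sketch using .NET 4's BlockingCollection (EmailWorker and the Action-based job shape are assumptions for the example, not from the original answer):

using System;
using System.Collections.Concurrent;
using System.Threading;

public static class EmailWorker
{
    private static readonly BlockingCollection<Action> _work =
        new BlockingCollection<Action>();

    static EmailWorker()
    {
        var thread = new Thread(() =>
        {
            // Blocks while the queue is empty; wakes when work arrives.
            foreach (var job in _work.GetConsumingEnumerable())
            {
                try { job(); }
                catch (Exception) { /* log and keep the worker alive */ }
            }
        });
        thread.IsBackground = true; // don't block process shutdown
        thread.Start();
    }

    // Called from page code: enqueue and return immediately,
    // e.g. EmailWorker.Enqueue(() => SendEmails()) with your own send method.
    public static void Enqueue(Action job)
    {
        _work.Add(job);
    }
}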
However, keep in mind that web servers sometimes crash, AppPools recycle, etc. So if any of the queued work needs to be reliably executed, then moving it out of process is probably a better idea (such as into a Windows Service). One way of doing that, which preserves the order of requests and maintains persistence, is to use Service Broker. You write the request to a Service Broker queue from your web tier (with an async request), and then read those messages from a service running on the same machine or a different one. You can also scale nicely that way by simply adding more instances of the service (or more threads in it).
In case it helps, I walk through using both a background thread and Service Broker in detail in my book, including code examples: Ultra-Fast ASP.NET.
