Workflow Foundation 4 - Showing Workflow Progress - c#

In a project using WF4, we are required to show our users a friendly list of the steps of the workflow (the logical steps from the user's point of view, not the technical steps), along with each step's status (e.g. a big green check mark once a step has completed).
I'm wondering if this is something that Workflow tracking should be used for or not. My impression from what I have read about workflow tracking is that it is really more for technical logging.
The alternatives would be persisting an ordered list of steps and their statuses, either along with the workflow or outside of it.
Either way, I'm fuzzy on how this should work and would appreciate suggestions.

Workflow Services would be very useful for you. They are the convergence point between WF and WCF and are used to marshal WCF service calls to WF instances. You can create a duplex channel and receive updates via the callback channel.

You can use workflow tracking to record activity execution and use the WorkflowDesigner to show the progress to the user. There is an example of how to do this in the WF4 samples from Microsoft.
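To make the tracking suggestion concrete, here is a minimal sketch of a custom `TrackingParticipant` that records each activity's state as it executes; a UI could map the activity names it collects onto the user-friendly step list. The class and dictionary names are illustrative, not from the WF4 samples.

```csharp
using System;
using System.Activities;
using System.Activities.Tracking;
using System.Collections.Concurrent;

// Hypothetical participant: records the latest state of each named
// activity so a UI can render a friendly step list with statuses.
public class StepProgressParticipant : TrackingParticipant
{
    // Activity display name -> state ("Executing", "Closed", "Faulted", ...)
    public ConcurrentDictionary<string, string> StepStatus { get; } =
        new ConcurrentDictionary<string, string>();

    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        var stateRecord = record as ActivityStateRecord;
        if (stateRecord != null)
        {
            // "Closed" means the activity completed successfully,
            // which is where you would show the green check mark.
            StepStatus[stateRecord.Activity.Name] = stateRecord.State;
        }
    }
}

// Hosting side (sketch):
//   var app = new WorkflowApplication(new MyWorkflow());
//   app.Extensions.Add(new StepProgressParticipant());
//   app.Run();
```

You can also attach a `TrackingProfile` to the participant so that only the activities corresponding to your logical user-facing steps are tracked, rather than every technical activity.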

Related

How to allow the user to create tasks and set the time when it runs?

I'm building a project that allows the user to send notifications to their customers.
I want the user to also be able to schedule these notifications to run at a specific time instead of immediately.
How would I achieve this? What's the best structure? Should I create a cron job that hits the server every second to check whether a task is scheduled for that second?
You can write your own code that runs tasks on a schedule, but there are already several third-party tools for that. I would recommend having a look at Hangfire or Quartz.NET.
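With Hangfire, for instance, a one-off job can be scheduled for the user's chosen time and Hangfire persists and fires it for you, so no per-second polling is needed. This is a sketch; `SendNotification` and the id parameter are illustrative names, not part of Hangfire.

```csharp
using System;
using Hangfire;

public class NotificationJobs
{
    // Schedule a single notification to fire at the customer-chosen time.
    public static void ScheduleNotification(int notificationId, DateTimeOffset sendAt)
    {
        // Hangfire stores the job and enqueues it once the time arrives.
        BackgroundJob.Schedule(() => SendNotification(notificationId), sendAt);
    }

    // The method Hangfire invokes when the job runs (hypothetical body).
    public static void SendNotification(int notificationId)
    {
        Console.WriteLine($"Sending notification {notificationId}");
    }
}
```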
I would recommend capturing the details from your customer and, in the backend, passing them on to Logic Apps or Power Automate (flow), which can be timer-based, to create a job within those apps. I would recommend Power Automate, as you can use it out of the box, whereas for Logic Apps you may have to do some configuration first. Since you tagged the question with Azure, I'm guessing you are on the Azure stack; if not, the above suggestion may not work for you.

Using a saga for controlling task queue workflow in C#

I am looking to build a distributed task system in which agents will perform tasks according to a certain workflow.
It seems like the concept of sagas is perfect for this use case, and there are two patterns:
1) Controller saga: a dedicated machine sends a command, waits for a reply, then sends the next command in the sequence, etc...
2) Routing slip saga: the steps are recorded in advance in the message itself.
I would like to get your opinion on these issues:
1) are sagas indeed perfect for this use case?
2) which one of them is preferred for this use case?
3) if only some of the machines are able to perform certain tasks, how do I make sure that the other agents don't pick the message up? (Example: a task might be "execute this stored procedure" and I want it to run only on an agent that is dedicated to the database.)
EDIT (2015-10-24): (more information about the workflow)
The workflow I'm looking for is something along these lines: a 10-hour job divided into 10 chunks (mini-tasks). The dependency graph allows some of these to happen concurrently, while some of them will have to finish before the next one is queued up. I plan to incorporate this workflow logic (the dependencies) into the machine running the controller (the saga).
It would be optimal if I could change the workflow easily (for example, insert another task between "step 7" and "step 8", both of which are mini-tasks).
Each agent will run a few tasks concurrently, the exact number preferably dictated by CPU/IO utilization (i.e. it might run step 3 of workflow #1 and step 5 of workflow #2).
Thanks
1) are sagas indeed perfect for this use case?
Perfect might be a bit much, but it's a good way to handle many workflows.
2) which one of them is preferred for this use case?
Your updated workflow suggests that a Saga would be a great choice for the workflow. Adding steps would require code changes and deployment, but handling long running workflows with many steps seems perfect. Also, coordinating the completion of multiple async steps before a next step is a common use case I have used sagas for.
3) if only some of the machines are able to perform certain tasks, how do I make sure that the other agents don't pick the message up?
By types. Each activity has a specific message type corresponding to the action, e.g. "GetReportData" (executes a stored proc?). You'll have one group of services with consumers for that message type; only they will receive messages published with that type. If it's more complicated than that, e.g. GetReportData but only for Customer A's machine and not Customer B's, then you get into content-based routing. This is generally frowned upon, and you might want to find another way to model your work if possible. Content-based routing is not something that is supported in MassTransit.
Orchestration
Sagas work well for orchestrations, especially long-running orchestrations. I've personally worked on a setup where we had to convert all kinds of media (images and video files, but also PowerPoint, PDF, subtitles, etc.), and NServiceBus sagas were used where the previous implementation was built on a polling database table that caused congestion issues.
Controller vs Routing slip
Both the controller and routing-slip variations can be used. You mention that you want to change the workflow easily, but not whether you want to easily change an already instantiated workflow. Controller types are easier to 'update', while routing slips are very good for workflows that must not change.
Routing slips carry their flow with them, so the workflow definition can be changed, even radically, without affecting existing instances; however, it is hard to change instances already in flight. Controllers are the opposite: the flow can be modified, but changes need to be backwards compatible.
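The "the flow travels with the message" idea can be sketched as a plain data structure; real frameworks (e.g. MassTransit Courier) are richer, and the property names here are purely illustrative.

```csharp
using System.Collections.Generic;

// Illustrative routing-slip message: the remaining steps ride along with
// the message itself, so changing the workflow definition never affects
// instances already in flight.
public class RoutingSlip
{
    // Endpoint addresses still to visit, in order (hypothetical values).
    public List<string> Itinerary { get; set; } = new List<string>();

    // Log of steps already completed, useful for auditing/compensation.
    public List<string> CompletedSteps { get; set; } = new List<string>();

    // Pop the next step off the itinerary and record it as visited.
    public string NextStep()
    {
        if (Itinerary.Count == 0) return null;
        var next = Itinerary[0];
        Itinerary.RemoveAt(0);
        CompletedSteps.Add(next);
        return next;
    }
}
```

A controller saga would instead keep this list in its own persisted state and dispatch the next command after each reply, which is why in-flight controller instances are easier to update but must stay backwards compatible.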
There are other variations too, see this post by Jimmy Bogard:
https://lostechies.com/jimmybogard/2013/05/14/saga-patterns-wrap-up/
Changing workflow
Usually the event that creates the saga instance does the setup for the rest of the steps. This becomes part of the saga state. If the workflow is changed, this cannot influence existing saga instances unless you explicitly want it to, or unless you hardcode steps using if statements.
My experience with the media conversion sagas is that the workflow fetched the tasks to be executed, kept them in saga state and iterated these steps.
Message pattern
Each task should be a command, modelled as asynchronous request/response. Based on the response you execute the next step(s). Pub/sub does not really work well here, as multiple 'workers' would receive the same 'event'.
Task
Create a message per task. Create a consumer that knows how to process this message.
For example:
Service X knows how to process A, B and C
Service Y knows how to process D and E
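In MassTransit terms, "a message per task with a dedicated consumer" looks roughly like the sketch below; only endpoints that register a `GetReportDataConsumer` will receive `GetReportData` commands, which is how the database-only agents stay isolated. The type names are illustrative.

```csharp
using System.Threading.Tasks;
using MassTransit;

// One message type per task.
public class GetReportData
{
    public int CustomerId { get; set; }
}

// One consumer per capability; hosted only on the services that are
// allowed to perform this task (e.g. the database-dedicated agents).
public class GetReportDataConsumer : IConsumer<GetReportData>
{
    public async Task Consume(ConsumeContext<GetReportData> context)
    {
        // Execute the stored procedure here, then reply or publish a
        // completion event for the saga to advance the workflow.
        await Task.CompletedTask;
    }
}
```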
Scaling
If Service X needs additional resources then you can scale out using either a distribution pattern (MSMQ) or using competing consumer (RabbitMQ, Azure Storage Queues, etc.).
Content Based Routing (CBR)
Avoid constructions like
Service X can process A, B and C but instance 1 supports A and B and instance 2 supports C.
It is probably better to split that into three services.
Services X and Y both know how to process D
How do you decide which service to send the command/request to?
As mentioned, MassTransit does not support CBR, and it's the same for NServiceBus, as CBR is often misused.
See this post by Udi Dahan:
http://udidahan.com/2011/03/20/careful-with-content-based-routing/
I'm not sure if I understand your question completely, but...
I'd rather go for agents pulling tasks. Each agent dequeues a task from the task list that is suitable for it. Tasks should be tagged with a type so the right agent can pick them up. Every time an agent finishes a task, it grabs another one. When an agent grabs a task, the task is marked as busy (you could store a timestamp to detect timeouts).
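The pull model above can be sketched in a few lines; in a real system the list and the busy flag would live in a database or broker rather than in memory, and the type names here are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class TaggedTask
{
    public string Type { get; set; }        // capability tag, e.g. "database"
    public bool Busy { get; set; }          // claimed by an agent
    public DateTime? ClaimedAt { get; set; } // timestamp for timeout detection
}

public class TaskList
{
    private readonly List<TaggedTask> _tasks = new List<TaggedTask>();
    private readonly object _lock = new object();

    public void Add(TaggedTask task)
    {
        lock (_lock) _tasks.Add(task);
    }

    // An agent passes the set of task types it can handle and claims
    // the first free task that matches; returns null if none is free.
    public TaggedTask TryClaim(ISet<string> capabilities)
    {
        lock (_lock)
        {
            var task = _tasks.FirstOrDefault(
                t => !t.Busy && capabilities.Contains(t.Type));
            if (task != null)
            {
                task.Busy = true;
                task.ClaimedAt = DateTime.UtcNow;
            }
            return task;
        }
    }
}
```

A background sweep could reset `Busy` on tasks whose `ClaimedAt` is older than the allowed timeout, so tasks claimed by a crashed agent get picked up again.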

Bookmark with timeout in WF 4.0 when persisted

I have been looking around for a while now and I want to create a timeout property on a bookmark in WF 4.0.
I can make it work using a Pick with two different branches (with a timer in one of them and my bookmark in the other).
However, this does not work if my workflow is persisted to the database (which it will be, since the timeout will be several days): the timer will not trigger until I next load the workflow, which can also be several days later.
Does anyone know if there is any other way to solve this in the WF 4.0? Or have you done a great workaround?
Okay, so what you're going to want to do is build a Workflow Service; you will not be able to do this nearly as easily with a workflow that is not hosted via the Workflow Service Host (WSH). To tell you it can't be done would be incorrect, but I can tell you that you don't want to.
That service will be available via a WCF endpoint and can do exactly what you're needing. You would build a workflow with a Pick containing two branches: the first has a Receive activity that the user can call into if they respond in time; the second has a durable timer that fires at a specified interval and lets you branch down another path. This same service can have more than one Receive activity, and thus expose more than one operation, so if your workflow has any other branches like this you can handle all of them in one atomic workflow.
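The suggested shape can be sketched in code as below. Under `WorkflowServiceHost` with persistence enabled, the `Delay` is backed by a durable timer, so it fires even if the instance has been unloaded for days. The operation and contract names are illustrative.

```csharp
using System;
using System.Activities.Statements;
using System.ServiceModel.Activities;

public static class TimeoutWorkflow
{
    // A Pick with two branches: a WCF Receive (the user responded in
    // time) racing against a durable Delay (the timeout path).
    public static Pick Build()
    {
        return new Pick
        {
            Branches =
            {
                new PickBranch
                {
                    Trigger = new Receive
                    {
                        OperationName = "SubmitResponse",
                        ServiceContractName = "IResponseService",
                        CanCreateInstance = false
                    }
                    // Action: continue the happy path here.
                },
                new PickBranch
                {
                    // Durable timer; persisted with the instance, so it
                    // fires even after the workflow has been unloaded.
                    Trigger = new Delay { Duration = TimeSpan.FromDays(3) }
                    // Action: handle the timeout here.
                }
            }
        };
    }
}
```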
Does this make sense?

NService Bus - Content based routing & auditing - is my approach ok?

I have a little trouble deciding which way to go while designing the message flow in our system.
Because of the volatile nature of our business processes (e.g. calculating freight costs), we use a workflow framework to be able to change the process on the fly.
The general process should look something like this
The interface is a service which connects to the customers system via whatever interface the customer provides (webservices, tcp endpoints, database polling, files, you name it). Then a command is sent to the executor containing the received data and the id of the workflow to be executed.
The first problem comes at the point where we want to distribute load on multiple worker services.
Say we have different processes like printing parcel labels, calculating prices, sending notification mails. Printing the labels should never be delayed because a ton of mailing workflows is executed. So we want to be able to route commands to different workers based on the work they do.
Because all commands are like "execute workflow XY", we would be required to implement our own content-based routing. NServiceBus does not support this out of the box, mostly because it's an anti-pattern.
Is there a better way to do this, when you are not able to use different message types to route your messages?
The second problem comes when we want to add monitoring. Because an endpoint can only subscribe to one queue for each message type, we cannot just let all executors publish an "I completed a workflow" message. The current solution would be to Bus.Send the message to a preconfigured auditing endpoint. This feels a little like cheating to me ;)
Is there a better way to consolidate the published messages of multiple workers into one queue again? If problem #1 did not exist, I think all workers could use the same input queue, but that is not possible in this scenario.
You can try to make your routing headers-based rather than content-based, which should be much easier. You are not interested in whether the workflow is going to print labels or not; you are interested in whether the command is high priority or not. So you can add this information to the message headers...
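As a sketch of the headers-based idea with a recent NServiceBus API (the original question likely used the older `Bus.Send` style): the sender attaches a priority header and, optionally, routes priority work to a dedicated endpoint. The header key, endpoint name, and message type are all illustrative.

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical workflow command; the body stays opaque to routing.
public class ExecuteWorkflow : ICommand
{
    public string WorkflowId { get; set; }
}

public class WorkflowDispatcher
{
    public async Task DispatchAsync(
        IMessageSession session, ExecuteWorkflow command, bool isPriority)
    {
        var options = new SendOptions();

        // Routing decisions read this header, never the message body.
        options.SetHeader("Workflow.Priority", isPriority ? "High" : "Normal");

        // Optionally send priority work straight to a dedicated endpoint
        // so label printing is never stuck behind a flood of mail jobs.
        if (isPriority)
        {
            options.SetDestination("PriorityWorkers");
        }

        await session.Send(command, options);
    }
}
```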

Windows Workflows & Global events

I'm relatively new to using Windows Workflow, but we have a requirement whereby all currently active workflows undertake an action based upon a "global event", rather than an event aimed at a single instance.
For example, you could have a workflow used for the submission and tracking of tickets, with the scenario that when the support desk goes home, all of the active workflows generate an e-mail to the person who submitted the ticket saying that their ticket won't be looked at today.
What is the best approach to do this?
Is it a custom activity, or some other method of enumerating all of the active workflows and firing an event/queuing an item to the workflow queue?
Clearly, from the workflow's perspective, it would be nice to have an activity within it which fires when, in the case of the example above, the office closes.
All input gratefully received.
It depends on how you are hosting your workflows. Using workflow services and WCF messaging is by far the easier option and would be my preference.
Assuming you are using workflow services with persistence enabled you can easily get a list of each workflow instance from the store so you can send the WCF message to them. Using the active bookmarks in the instance store you can actually see if the workflow supports the operation in question at the moment.
If you are self-hosting, things are a lot harder and you will need to create a custom activity with a bookmark to handle this. And since workflows can be unloaded from memory, you will need some external code to reload the workflows.
BTW workflow queues are a WF3 feature that has been replaced by bookmarks in WF4.
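For the self-hosted case, the bookmark approach looks roughly like this: a custom activity parks the instance on a named bookmark, and host code resumes that bookmark on each loaded instance when the global event occurs. The bookmark name and callback body are illustrative.

```csharp
using System.Activities;

// WF4 activity that waits on a named bookmark for the global event.
public sealed class WaitForOfficeClosed : NativeActivity
{
    // The instance may go idle (and be persisted) at this bookmark.
    protected override bool CanInduceIdle
    {
        get { return true; }
    }

    protected override void Execute(NativeActivityContext context)
    {
        context.CreateBookmark("OfficeClosed", OnOfficeClosed);
    }

    private void OnOfficeClosed(
        NativeActivityContext context, Bookmark bookmark, object value)
    {
        // e.g. send the "your ticket won't be looked at today" mail here.
    }
}

// Host side (sketch): for each persisted instance id,
//   var app = new WorkflowApplication(definition);
//   app.Load(instanceId);
//   app.ResumeBookmark("OfficeClosed", null);
```

The external code mentioned above would enumerate instance ids from the instance store, load each one, and resume the bookmark; instances without an active "OfficeClosed" bookmark can simply be skipped.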
One way to do this would be to get the application hosting the WorkflowRuntime to enqueue a work item into the workflow queue. All activities that need to respond to this stimulus must have a bookmark for this queue.