For a client, the system we're creating must support the following:
- It must be possible to run multiple workflows, and multiple instances of the same workflow, each with a different context (different data/business objects).
- Some workflows will be long-running, involve multiple users/client sessions, and wait for external user input. So the workflows must be able to be persisted and to respond to signals from a client app. This also means that the workflows must execute in a server app (right?).
- I want to be able to run all kinds of workflows on the server app, and I do not want to have to re-deploy the server app when a workflow changes.
My first thought was Workflow Services. After a lot of research I concluded that this is not the right path, since Workflow Services basically makes it possible to execute activities at a remote location from a workflow started in a client app. Is this correct? Or can I use Workflow Services for the scenario above? Most examples and tutorials are basically a Receive/Send combination with some logic in between.
Basically I want to initiate (from a client app) the start of a workflow with a specific context (in the workflow server app).
What is the best approach?
Any help is very much appreciated!
As for your requirements:
It must be possible to run multiple workflows, and multiple instances of the same workflow, each with a different context (different data/business objects).
This is no problem with WF.
Some workflows will be long-running, involve multiple users/client sessions, and wait for external user input. So the workflows must be able to be persisted and to respond to signals from a client app. This also means that the workflows must execute in a server app (right?).
WF is designed for long-running tasks that can interact with outside influences. However, that doesn't mean it's easy to accomplish; there is no universal solution you can simply hook into. You will probably have to design custom Activities that interact with workflow extensions which handle moving user input into the workflow. The same goes for exposing the workflow to the outside, although WF4 does come with a host of WCF activities which could be used to accomplish this.
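To make the extension idea concrete, here is a minimal sketch, assuming a hypothetical NotificationExtension and NotifyClient activity (neither is part of WF; how the extension actually delivers the message is up to the host):

    using System;
    using System.Activities;

    // Hypothetical host-side extension; delivery (WCF callback, queue, ...) is up to you.
    public class NotificationExtension
    {
        public void Notify(string message)
        {
            Console.WriteLine(message);
        }
    }

    // Custom activity that hands data from the workflow out to the extension.
    public sealed class NotifyClient : CodeActivity
    {
        public InArgument<string> Message { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            var extension = context.GetExtension<NotificationExtension>();
            extension.Notify(Message.Get(context));
        }
    }

The host registers the extension before starting the instance, e.g. workflowApplication.Extensions.Add(new NotificationExtension()).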
I want to be able to run all kinds of workflows on the server app, and I do not want to have to re-deploy the server app when a workflow changes.
This is harder to accomplish. You must, at a minimum, separate the workflows from the server code. The simplest route is to store your workflows as XAML and load them at runtime from, say, a database.
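A minimal sketch of that route, assuming the definition is stored as a XAML string (LoadWorkflowXamlFromDatabase is a hypothetical helper):

    using System.Activities;
    using System.Activities.XamlIntegration;
    using System.Collections.Generic;
    using System.IO;

    // Fetch the XAML text from wherever you keep it (database, file share, ...).
    string xaml = LoadWorkflowXamlFromDatabase("OrderApproval");

    // Deserialize it into an Activity tree at runtime.
    Activity workflow = ActivityXamlServices.Load(new StringReader(xaml));

    // Run it with instance-specific inputs (the per-instance context you mentioned).
    var inputs = new Dictionary<string, object> { { "OrderId", 42 } };
    new WorkflowApplication(workflow, inputs).Run();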
Other options are to use some kind of dependency injection framework (such as StructureMap or Unity) which loads the workflow assembly at runtime. If the workflows change, you can drop the new assembly on the server, change your config and restart. Alternatively, you can isolate your workflow assemblies within their own AppDomain, load them at runtime, and throw away the domain when you must load a new version. Which one you choose depends on your requirements; I'm actually doing the third option, as I have to load many different versions of workflow assemblies at runtime, run them concurrently, and they often have embedded resources, which prevents me from going the XAML route.
My first thought was Workflow Services. After a lot of research I concluded that this is not the right path, since Workflow Services basically makes it possible to execute activities at a remote location from a workflow started in a client app. Is this correct?
I'm hosting my workflows within a standard Windows service application, so I have to manage and maintain the WCF frontend which the client uses to interact with my workflows. As far as I can tell, Workflow Services seems like a viable option for stable workflows, if I understand it correctly. Hosting the application in AppFabric is also a good option, and I believe simpler than using a Windows service. But no matter what the host is, you have two options: either your workflows define your service contract, or you define the service contract yourself and handle all execution of, and communication with, the workflows you are managing.
The first is a good option for stable workflows with a simple facade. It doesn't seem like a good option for you, as you must load workflows dynamically as they change; that requires logic outside of the workflow to handle not only communications from the client ("here's a new version of workflow X!") but also managing the workflow lifespan.
It seems like you will have to find some kind of host application (an IIS web service application, a Windows service, AppFabric, Azure), define your WCF service and bring it online, handle calls from clients, and pass those calls on to your workflow-running code, which must then load the right workflow, execute the call and return the result up the chain.
I can't help but notice that you seem slightly ill-prepared for the journey that awaits you. I would strongly suggest creating a prototype that slices neatly through the centermost meat of your requirements: a hosted application (I'd suggest AppFabric) with a WCF frontend that loads a XAML-based workflow that processes client calls. Once you have the simple version nailed down, you can widen the scope to encompass all your requirements.
Related
I am currently looking into some re-architecture for a set of applications for our organization. We currently have a set of 10-15 odd stand-alone applications that communicate to each other and provide an intermediate level between client software and hardware.
The problem with the current model is the sheer number of individual apps, which adds memory overhead and communication latency, bloats the system, and makes it difficult to recover when any of these apps crashes.
I am thinking of combining the applications into 1-2 logical units that would help address some of these issues. The dilemma is how to do this well:
Windows Service
UI Application
Both?
The goal is to have an always-on system that will handle all of the client-hardware comms, but also have a rich admin-user configuration UI that can talk to all of the individual components of this system and provide config/etc. capabilities. Having a WinForms/WPF application would allow easy admin access to the system config and provide real-time feedback (camera feed, etc), but leaves it open to admins accidentally closing the window. Having a service do all of that work is great, but I am not sure how to provide a rich admin UI that interacts with and changes this service.
Any ideas or links worth reading?
Thanks!
Just thought I'd update my own question for anyone else who might have a similar question.
What I went with is a centralized Windows Service that exposes a number of WCF endpoints for a number of its child components. Sitting on top of that is a UI application that communicates to the Windows Service through the WCF endpoints. To make building & debugging easier the Windows Service is configured to run as a Console application when run in debug.
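For anyone curious, the run-as-console-in-debug pattern looks roughly like this (MyService and its StartForConsole/StopForConsole helpers are placeholders that simply call OnStart/OnStop):

    using System;
    using System.ServiceProcess;

    static class Program
    {
        static void Main()
        {
    #if DEBUG
            // Debug builds: run the service logic directly so breakpoints work
            // without installing the service.
            var service = new MyService();
            service.StartForConsole();
            Console.WriteLine("Running. Press Enter to stop.");
            Console.ReadLine();
            service.StopForConsole();
    #else
            // Release builds: hand control to the Windows Service Control Manager.
            ServiceBase.Run(new MyService());
    #endif
        }
    }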
This solution seems to work great so far!
I am in the process of creating an application which will communicate with a single server where WCF Web Service(s) would be installed. I am a little new to this process and was wondering which of these two options would be better in the long run to handle the load for a significant amount of users:
1- Create and install a single Web Service on a multi-core server for all of the client applications to communicate with.
2- Create and install multiple Web Services on a multi-core server, each to communicate with different modules inside of the client application.
All in all, I'm just trying to figure out whether, in processing time and with a large number of users, there is a significant difference between options 1 and 2, or whether option 2 would just create an unnecessary programming headache.
Thanks,
Patrick
The advantage of having multiple web services would be that each can have its own application pool (i.e. worker process) in IIS. So you can recycle one application pool for one web service without affecting the others.
The advantage of having a single web service would be potentially easier maintenance, since the code is in one file, etc. Of course, if it's a lot of code, this can make maintenance harder too.
So the question is, what's the right level of granularity?
You can split the web services up per business function, and I've found that this is a good approach. For example, if you have some business methods that deal with invoicing, you could put those into an Invoicing web service.
If you have other business methods that deal with shipping orders, you could put those into a Shipping web service.
This creates a nice split, in my opinion, and also lets you leverage the application pool advantages discussed earlier.
Example
You can see a real world example of this type of split with FedEx. Note how they split their web services up by shipping, tracking and visibility, etc.
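To sketch what such a split looks like in code, here are minimal per-function contracts (all names are illustrative):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class Invoice { [DataMember] public int Id { get; set; } }

    [DataContract]
    public class TrackingInfo { [DataMember] public string Number { get; set; } }

    [ServiceContract]
    public interface IInvoicing
    {
        [OperationContract]
        Invoice GetInvoice(int invoiceId);
    }

    [ServiceContract]
    public interface IShipping
    {
        [OperationContract]
        TrackingInfo ShipOrder(int orderId);
    }

Each contract's implementation can then be deployed as its own IIS application with its own app pool, so recycling Invoicing never touches Shipping.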
I'm relatively new to using Windows Workflow, but we have a requirement whereby all currently active workflows undertake an action based upon a "global event", rather than an event aimed at a single instance.
e.g. you could have a workflow which is used for the submission and tracking of tickets, with the scenario that when the support desk goes home, all of the active workflows generate an e-mail to the person who submitted the ticket saying that their ticket won't be looked at today.
What is the best approach to do this?
Is it a custom activity or some other method of enumerating all of the active workflows and firing an event/queueing an item to the workflow queue?
Clearly, from the workflow perspective, it would be nice to have an activity within it which fires when, in the example above, the office closes.
All input gratefully received.
It depends on how you are hosting your workflows. Using workflow services and WCF messaging is by far the easier option and would be my preference.
Assuming you are using workflow services with persistence enabled, you can easily get a list of every workflow instance from the store so you can send the WCF message to each of them. Using the active bookmarks in the instance store you can even see whether a given workflow currently supports the operation in question.
If you are self-hosting, things are a lot harder and you will need to create a custom activity with a bookmark to handle this. And since workflows can be unloaded from memory, you will also need some external code to reload them before resuming the bookmark.
BTW workflow queues are a WF3 feature that has been replaced by bookmarks in WF4.
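For the self-hosted case, a minimal sketch of such a bookmark activity (the class and bookmark names are illustrative):

    using System.Activities;

    public sealed class WaitForGlobalEvent : NativeActivity<string>
    {
        public InArgument<string> BookmarkName { get; set; }

        // Allows the workflow to go idle (and be persisted) while waiting.
        protected override bool CanInduceIdle
        {
            get { return true; }
        }

        protected override void Execute(NativeActivityContext context)
        {
            context.CreateBookmark(BookmarkName.Get(context), OnEventRaised);
        }

        private void OnEventRaised(NativeActivityContext context, Bookmark bookmark, object value)
        {
            Result.Set(context, (string)value);
        }
    }

The host then raises the global event by reloading each instance and calling something like workflowApplication.ResumeBookmark("OfficeClosed", payload).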
One way to do this would be to get the application hosting the workflow runtime to enqueue a work item into the workflow queue. All activities that need to respond to this stimulus must be listening on that queue.
I want to have an application that works as a host to many other small applications. Each of those applications should work as a kind of plugin to this main application. I call them plugins not in the sense that they add something to the main application, but because they can only work with this host application, as they depend on some of its services.
My idea was to have each of those plugins run in a different app domain. The problem seems to be that my host application has a set of services that my plugins will want to use, and from what I understand, making data flow in and out of different app domains is costly and awkward.
On one hand I'd like them to behave as stand-alone applications (although, as I said, they frequently need to use the host application's services), but on the other hand I'd like that if any of them crashes, my main application doesn't suffer from it.
What is the best (.NET) approach to this kind of situation? Make them all run on the same AppDomain but each one in a different Thread? Use different AppDomains? One for each "plugin"? How would I make them communicate with the Host Application? Any other way of doing this?
Although speed is not an issue here, I wouldn't like for function calls to be that much slower than they are when we're working with just a regular .NET application.
Thanks
EDIT: Maybe I really need to use different AppDomains. From what I've been reading, loading assemblies into different AppDomains is the only way to be able to unload them from the process later.
I've implemented something along these lines using the Managed Addin Framework (MAF) in the System.AddIn namespace. With MAF you package your addins as separate DLLs, which your host app can discover and launch in its own app domain, in one separate domain for all of the addins, or with each addin in its own domain. With shadow copy and separate domains you can even update an addin without shutting down your host app.
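The discovery/activation flow is roughly this, assuming the pipeline directories are already built and ILogger is your host view type (both the path and the type are illustrative):

    using System.AddIn.Hosting;
    using System.Collections.ObjectModel;

    // Root of the MAF pipeline directory structure (hypothetical path).
    string pipelineRoot = @"C:\MyApp\Pipeline";

    // Rebuild the pipeline cache, then find addins matching the host view.
    AddInStore.Update(pipelineRoot);
    Collection<AddInToken> tokens = AddInStore.FindAddIns(typeof(ILogger), pipelineRoot);

    // Activate the first match in its own AppDomain.
    ILogger logger = tokens[0].Activate<ILogger>(AddInSecurityLevel.FullTrust);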
Your host app and the addins communicate through contracts that you derive from MAF interfaces. You can send objects back and forth between the host and the addins. The contracts provide a black-box interface between the addins and the host, allowing you to change an addin's implementation unbeknownst to the host.
Addins can even communicate between themselves if the host tells them about each other. In my case a logging addin is shared by the others. This lets me drop in different loggers without touching the other addins or the host.
For my app, the addins use simple supervisor classes that launch worker classes on their own threads to do all of the processing. Workers catch their own exceptions, which they report back to their supervisor through callback methods. Supervisors can restart workers or take other action. The host controls the supervisors through a command contract, which instructs them to start and stop workers and return data.
My host app is a Windows service. The worker threads have thrown exceptions for all the usual reasons (including bugs!), but the host app has never crashed in any of our installations. Since debugging services is inconvenient, addins allow me to build test apps that use the same contracts, with added assurance that I'm testing what I deploy.
Addins can expose UI elements, too. This is very helpful to me as I need to deploy a controller app with the host service, since services do not have UIs. Each plugin includes its own controller interface. The controller app itself is very simple - it loads the addins and displays their UI elements. This allows me to ship an updated addin with an updated interface and not have to ship a new controller.
Even though the controller and the host service use the same addins, they don't step on each other; in fact, they don't even know that another app is using the same addins. The controller and the host talk to each other through a shared database, but you could also use another inter-app mechanism like MSMQ. In the next version the host will be a WCF service with addins on the backend and web services for control.
This is a bit long-winded but I wanted to give you an idea of how versatile MAF is. It's not as complex as it might first look, and you can build rock-solid apps with it.
It depends on how much trust you wish to allow the extensions. I'm working on a similar application and I've chosen to mostly trust the extension code, as this greatly simplifies things. I call into the code from a common thread (in my case, the extensions don't really 'run' in any continuous loop, but rather execute certain tasks that the main application wants to do) and catch exceptions in this thread, so as to provide helpful warnings that loaded extensions are misbehaving.
Currently there's nothing keeping these extensions from launching their own threads that could throw and crash the whole app, but this where I've had to make the trade-off between safety and complexity. My application is not mission-critical (not like a web server or database server), so I consider it an acceptable risk that a buggy extension could bring down my application. I provide safeguards to more politely cover the most common failure cases and leave it to the plugin developers (who will mostly be in-house people for now anyway) to clean up their bugs.
In regards to unloading: yes, you can only unload an assembly's code and metadata by placing it in a separate AppDomain and unloading that domain. That said, unless you want to be loading and unloading frequently over the life of your program, the overhead associated with keeping the code in memory is not necessarily an issue. Any actual instances or resources using types from the assembly will still be cleaned up by the GC when you stop using them, so the fact that the code is still in memory doesn't imply a memory leak.
If your main use case is a series of plugins that you locate once at startup and then provide an option to instantiate while your app is running, I suggest investigating the real memory footprint associated with loading all of them at start-up and keeping them loaded. If you use AppDomains, there will be additional overhead there as well (for instance, memory for the proxy objects and loaded/JITed code to support AppDomain marshaling). There will also be CPU overhead associated with the marshaling and attendant serialization.
In short, I would only use AppDomains if one of the following were true:
- I want true isolation for the purposes of code security (i.e. I need to run untrusted code in an isolated way).
- My app is mission-critical and I absolutely need to make sure that if a plugin fails, it can't bring down my core app.
- I need to load and unload the same plugin repeatedly, in order to support dynamic changes to the DLL. This is mainly if my app can't stop running, but I want to hot-patch plugins while it's still running.
I would not use AppDomains for the sole purpose of reducing the possible memory footprint by allowing assemblies to be unloaded.
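If you do go the AppDomain route, the core load/execute/unload cycle looks roughly like this (IPlugin and the assembly/type names are hypothetical; the plugin type must derive from MarshalByRefObject so calls cross the domain boundary via a proxy):

    using System;

    // Create an isolated domain for the plugin.
    AppDomain domain = AppDomain.CreateDomain("PluginDomain");

    // Instantiate the plugin inside that domain; the host receives a proxy.
    var plugin = (IPlugin)domain.CreateInstanceAndUnwrap(
        "MyCompany.Plugins",            // assembly name (hypothetical)
        "MyCompany.Plugins.MyPlugin");  // type name (hypothetical)

    plugin.Execute();

    // Unloading the domain is the only way to evict the loaded assembly.
    AppDomain.Unload(domain);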
This is an interesting question.
My first idea was to simply have your plugin applications implement interfaces from your host application, loading them via reflection to allow them to communicate, but this would only enable communication and would not bring a real sandbox-like architecture.
My second thought was to design a service-oriented platform. The host application would be a kind of "plugin broadcaster" which publishes your plugins in a ServiceHost on a different thread. As this needs to be responsive and to work with no configuration at all, the host application could communicate with the plugins through a named pipes channel (NetNamedPipeBinding in WCF), which only ever talks to local pipes and does not need any network configuration or knowledge at all. I think this could be a good solution to your problem.
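A minimal sketch of publishing such a contract over named pipes (IPluginBroker and PluginBroker are illustrative names):

    using System;
    using System.ServiceModel;

    var host = new ServiceHost(
        typeof(PluginBroker),
        new Uri("net.pipe://localhost/pluginbroker"));

    // Named pipes are local-machine only: no network configuration required.
    host.AddServiceEndpoint(typeof(IPluginBroker), new NetNamedPipeBinding(), string.Empty);
    host.Open();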
Regards.
I've got a C# service that currently runs single-instance on a PC. I'd like to split this component so that it runs on multiple PCs. Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine.
Data synchronization can be done by the DB, so that should not be much of an issue. My current idea is to use some kind of load balancer that splits and sends the incoming requests to the array of PCs and makes sure the work is actually processed.
How would I implement such a functionality? I'm not sure if I'm asking the right question. If my understanding of how this goal should be achieved is wrong, please give me a hint.
Edit:
I wonder if the idea given above (a load balancer splits work packages among the PCs and checks for results) is feasible at all. If there is some kind of already-implemented solution to this seemingly common problem, I'd love to use it.
Availability is a critical requirement.
I'd recommend looking at a Pull model of load-sharing rather than a Push model. When pushing work, the coordinating server(s)/load balancer must be aware of all the servers currently running in your system so that it knows where to forward requests; this must either be set in config or determined dynamically (such as in the Publisher-Subscriber model), and then constantly checked to detect whether any servers have gone offline. Whilst it's entirely feasible, it can complicate the scaling-out of your application.
With a Pull architecture, you have a central work queue (hosted in MSMQ, SQL Server Service Broker or similar) and each processing service pulls work off that queue. Expose a WCF service to accept external requests and place the work onto the queue, safe in the knowledge that some server will do the work, even though you don't know exactly which one. This has the added benefits that each server monitors its own workload and picks up work as and when it is ready, and you can add or remove servers to/from this model without any change in config.
This architecture is supported by NServiceBus and the communication between Windows Azure Web & Worker roles.
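To give a feel for the pull side, a minimal sketch of a worker loop over MSMQ (the queue path, WorkItem type and Process handler are all illustrative):

    using System;
    using System.Messaging;

    using (var queue = new MessageQueue(@".\private$\work"))
    {
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(WorkItem) });

        while (true)
        {
            try
            {
                // Blocks until a message arrives or the timeout elapses.
                Message message = queue.Receive(TimeSpan.FromSeconds(5));
                Process((WorkItem)message.Body);
            }
            catch (MessageQueueException e)
            {
                if (e.MessageQueueErrorCode != MessageQueueErrorCode.IOTimeout)
                    throw;
                // No work available right now; loop and wait again.
            }
        }
    }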
From what you said, each PC will require a full copy of your service:
Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine.
Otherwise you won't be able to move its work to another PC.
I would be tempted to have a central server which farms out work to the individual PCs. This means that you would need some form of communication between each machine and the central server, and a record kept on the central server of what work has been assigned where.
You'll also need each machine to measure its CPU load and reject work if it is too busy.
A multi-threaded approach to the service would make good use of the multiple processor cores that are ubiquitous nowadays.
How about using a single server and multi-threading your processing? Or even multi-threading on a PC, since you can get many cores in a standard desktop now.
This obviously doesn't deal with the machine going down, but could give you much more performance for less investment.
You can look into Windows Clustering, but you will have to handle a set of issues that depend on the behaviour of the service (if you add more details about the service itself, I can give a more specific answer).
This depends on how you want to split your workload. This is usually done in one of two ways:
Splitting the same workload across multiple services
This means the same service is installed on different servers, each doing the same job. Assume your service reads huge amounts of data from the DB servers, processes it to produce large client-specific data files, and finally sends those files to the clients. In this approach all the services installed on the different servers do the same work, but split it between them to increase throughput.
Splitting parts of the workload across multiple services
In this approach each service is assigned an individual job and works towards a different goal. In the example above, one service is responsible for reading data from the DB and generating the large data files, while another service is configured only to read the data files and send them to the clients.
I have implemented the second approach in one of my projects, because it let me isolate and debug errors in case of failures.
The usual approach for a load balancer is to split service requests evenly between all service instances.
For each work item (request) you can store the relevant information in a database. Each service should then also have at least one background thread checking the database for abandoned work items.
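The reclaim step of that background thread might look something like this (the table, columns and timeout are all hypothetical; connectionString is assumed to be defined elsewhere):

    using System;
    using System.Data.SqlClient;

    // Take over work items whose owner hasn't checked in for 10 minutes.
    const string reclaimSql =
        @"UPDATE WorkItems
          SET Owner = @me, ClaimedAt = GETUTCDATE()
          WHERE Owner IS NOT NULL
            AND ClaimedAt < DATEADD(minute, -10, GETUTCDATE())";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(reclaimSql, connection))
    {
        command.Parameters.AddWithValue("@me", Environment.MachineName);
        connection.Open();
        int reclaimed = command.ExecuteNonQuery();
    }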
I would suggest that you publish your service through WCF (Windows Communication Foundation).
Then implement a "central" client application which keeps track of the available providers of your service and dishes out work. The central app will act as scheduler and load balancer for the tasks to be performed.
Check out Juval Löwy's book on WCF ("Programming WCF Services") for a good introduction to this topic.
You can have a look at NGrid: http://ngrid.sourceforge.net/
or Alchemi: http://www.gridbus.org/~alchemi/index.html
Both are grid computing frameworks with load balancers that will get you started in no time.
Cheers,
Florian