I am creating a WCF service (CALLER) for Azure. The service (CALLER) calls async methods of a third-party service (EXTN). The third-party service calls the callback methods of another WCF service (LISTNER) hosted by me on Azure. The CALLER enters the service details in the database with status = PENDING.
In the callback service (LISTNER) I am updating the status of the request to COMPLETED/FAILED in the database.
But I want the CALLER to be notified when the status is updated in the SQL Azure database.
I am thinking of creating a worker thread that will poll the database periodically to check for the status update and notify the CALLER.
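Roughly what I have in mind for that worker (the table, columns and NotifyCaller hook below are just placeholders, not my real schema):

using System;
using System.Data.SqlClient;
using System.Threading;

public class StatusPoller
{
    private readonly string _connectionString;
    private Timer _timer;

    public StatusPoller(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Start()
    {
        // poll every 30 seconds
        _timer = new Timer(Poll, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    private void Poll(object state)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Id FROM ServiceRequests WHERE Status IN ('COMPLETED','FAILED') AND Notified = 0",
            conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    NotifyCaller(reader.GetInt32(0));
            }
        }
    }

    private void NotifyCaller(int requestId)
    {
        // placeholder: raise an event / call back into CALLER, then mark Notified = 1
    }
}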
Is there any other better / efficient alternative to this approach?
The features you're looking for are implemented in the AppFabric service bus.
Not really. There is another way (not sure it works on Azure) using the integrated SQL message queueing (a queue populated on updates via a trigger), and your thread could then poll it continuously (there is a way to have the read WAIT for an entry in the queue, so you issue one read and it waits), but besides that...
...no, not from the database level.
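Roughly what that blocking read looks like from .NET (only a sketch; the Service Broker queue and the trigger that posts to it have to be set up separately, and as said I'm not sure this works on SQL Azure; the queue name is made up):

using System.Data.SqlClient;

string connectionString = "<your connection string>";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(@"
    WAITFOR (
        RECEIVE TOP(1) CAST(message_body AS NVARCHAR(MAX)) AS Body
        FROM dbo.StatusChangeQueue
    ), TIMEOUT 60000;", conn))
{
    conn.Open();
    cmd.CommandTimeout = 0;            // let the WAITFOR block until a message or the timeout
    object body = cmd.ExecuteScalar(); // null if the 60 second timeout elapsed
    // body holds the queued status-change message, if one arrived
}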
I have a similar application and I handle it with a notification trigger OUTSIDE the database (i.e. notifications are sent from the business logic when values change).
Another option is to use Queues and have the caller poll for notification messages from the listener. The Service Bus can be used by having the Caller subscribe to event notifications sent from the Listener. In your scenario, though, it doesn't provide much more than the Queues do - if you are behind a firewall, the Service Bus uses polling as well.
Queues are probably the most efficient way to send notifications - that's why they were created in the first place. The Service Bus is used to create semi-permanent connections between different services by providing a lot more features than simple message passing. That makes it a bit less flexible and requires a bit more programming. Its billing model (charge per SB connection) reflects this too. You are not expected to use a lot of SB connections.
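A minimal sketch of the queue-based notification with the Azure Storage client library (the queue name and message format are just examples):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

CloudStorageAccount account = CloudStorageAccount.Parse("<storage connection string>");
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("status-notifications");
queue.CreateIfNotExists();

// listener side: publish the status change once the database row is updated
queue.AddMessage(new CloudQueueMessage("request-42:COMPLETED"));

// caller side: poll for notifications
CloudQueueMessage msg = queue.GetMessage();
if (msg != null)
{
    Console.WriteLine("Notification: " + msg.AsString);
    queue.DeleteMessage(msg);   // remove it once handled
}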
I'm trying to understand the difference between a queue trigger and a service bus queue trigger and which one I need!
I have an ASP.NET MVC site for creating and scheduling classes, each of which is represented as a row in a db table. When a class is over I want to send an email to all students asking them to rate their teacher.
As far as I'm aware the best way to do this is to create an Azure Function that will create and send the emails, but where I'm lost is how to trigger that function at a specific date and time.
Do I use a queue trigger or a service bus queue trigger? What's the difference between the two and which one would be the best for my scenario?
I need to be able to cancel the message in the queue if the class is canceled.
If cancelling scheduled messages is a hard requirement, only Service Bus allows doing that, see this blog.
However, it might be more practical to just add a check whether the class was cancelled at the beginning of your Azure Function, and quit if so.
For the rest of your scenarios, the services will both fit.
Generally, Service Bus has more advanced capabilities and guarantees, but costs more money if you send lots of messages. I usually pick Storage Queues unless I need any of those more advanced features.
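To illustrate the first point, a rough sketch of scheduling the "class finished" message for the end time and cancelling it later, assuming the Microsoft.Azure.ServiceBus client (the queue name, class ids and the wrapper type are made up):

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public class ClassEmailScheduler
{
    // made-up queue name; your Function would use a Service Bus queue trigger on it
    private readonly QueueClient _client =
        new QueueClient("<connection string>", "class-finished");

    public async Task<long> ScheduleAsync(int classId, DateTimeOffset classEndsAt)
    {
        var message = new Message(Encoding.UTF8.GetBytes(classId.ToString()));
        // store the returned sequence number with the class row so you can cancel later
        return await _client.ScheduleMessageAsync(message, classEndsAt);
    }

    public Task CancelAsync(long sequenceNumber)
        => _client.CancelScheduledMessageAsync(sequenceNumber);
}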
I have a Windows service written in C# that reads from MSMQ and, based on the type of the message, assigns it to an Agent that processes that message in a worker thread. The application starts with no agents; they are created dynamically at runtime as messages arrive in MSMQ.
Here is a basic figure of how it works:
If the agent worker thread is busy doing work, the message is queued to its local queue. So far so good. But if for some reason the service is stopped, the local queue content is lost.
I am trying to figure out the best way to handle this scenario. Right now the local queues are System.Collections.Concurrent.ConcurrentQueue instances. I could probably use a SQL CE database or some other persistent storage, but I am worried about performance. The other thing in my mind is to read from MSMQ only when agents are ready to process a message, but the problem is that I don't know what messages the MSMQ will contain.
What possible approaches can I take on this issue?
Your design is basically implements the following pattern: http://www.eaipatterns.com/MessageDispatcher.html
However, rather than using actual messaging you are choosing to implement the dispatcher in multithreaded code.
Rather, each processing agent should be an autonomous process with its own physical message queue. This is what will provide message durability in case of failure. It also allows you to scale simply by hosting more instances of the processing agent.
I have built a similar system dependent on Redis. The idea is that it provides fast, in-memory data access isolated from the rest of the application, and will not shut down when my service does. Furthermore, it will eventually persist my data to disk, so I get a good compromise between reliability and speed.
If you designed it so that each agent read from its own message queue hosted in Redis, you could keep the queues independent of the service's downtime, and have each worker's load apportioned when you next start the service.
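A bare-bones sketch of that idea with StackExchange.Redis (the key names are just examples):

using StackExchange.Redis;

ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost");
IDatabase db = redis.GetDatabase();

// dispatcher: push an incoming message onto the agent's persistent queue
db.ListLeftPush("queue:agent-a", "message payload");

// agent worker: pop the oldest message when it is ready to process one;
// the list lives in Redis, so it survives a restart of your service
RedisValue message = db.ListRightPop("queue:agent-a");
if (!message.IsNull)
{
    // process the message...
}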
Why don't you simply create two new MSMQ queues to receive the messages for AgentA and AgentB, and create a new agent that (transactionally) fetches the message from the main queue and dispatches it to the proper agent queue?
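Something along these lines, using System.Messaging (the queue paths are examples; all three queues need to be created as transactional queues):

using System.Messaging;

using (var main = new MessageQueue(@".\private$\main"))
using (var agentA = new MessageQueue(@".\private$\agent-a"))
using (var tx = new MessageQueueTransaction())
{
    tx.Begin();
    Message msg = main.Receive(tx);   // take the next message off the main queue
    agentA.Send(msg, tx);             // forward it to the agent's own queue
    tx.Commit();                      // both operations succeed or neither does
}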
What design has someone successfully used to implement job processing on Windows Azure?
Requirements:
Ability to push a Job into a queue.
N workers can consume Jobs from the queue and process them.
The invoker of the job should be able to be alerted (push, not polling) when the job has been completed.
Research thus far:
Create a "Job" Queue using Azure Service Bus Queues (http://blogs.msdn.com/b/appfabric/archive/2011/05/17/an-introduction-to-service-bus-queues.aspx)
Web front-end pushes Jobs to the queue, workers block on Receive() indefinitely (see http://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.brokeredmessage.aspx) until a Job is ready (to avoid "null" long polling, which costs money due to API call transaction costs)
With regards to being notified of Job completion:
There is no apparent ability to be alerted when a Job has been completed.
I thought I could leverage Service Bus Topics/Subscriptions (https://www.windowsazure.com/en-us/develop/net/how-to-guides/service-bus-topics/) and have a caller "subscribe to" a "Job Finished Notifications" topic, however:
You apparently can't subscribe more than once to the same topic, unless you create multiple "Subscription" entries (which does not scale)
Unless we create a "Subscription" for each Job Id and have the caller block on a Receive() API call (using I/O completion ports) on that subscription, we can't get real-time notifications of when a Job has been processed.
Has anyone had any experience implementing this sort of Job system (real time, low latency, with completion notifications for the caller) before?
Thanks
Actually, queues do not provide push. The whole idea of a queue is that the receiver does not need to receive the message in real time and checks for messages periodically. If you need real-time communication, you can create an HTTP/TCP listener on the receiver side and let the sender make an HTTP/TCP request.
Thus, one approach is to create a web service on the web role, using internal endpoints. You send the service's address along with the message to worker role using queue. When the job is finished, the worker role invokes the service to notify the web role that job is done.
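For example, a rough sketch of the notification contract and the call the worker role makes when the job is done (the contract and names here are made up):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IJobNotification
{
    [OperationContract]
    void JobCompleted(Guid jobId, bool succeeded);
}

public static class JobNotifier
{
    // worker role side: callbackAddress is the web role's internal endpoint address,
    // which was sent along with the job message in the queue
    public static void NotifyCaller(string callbackAddress, Guid jobId, bool succeeded)
    {
        var factory = new ChannelFactory<IJobNotification>(
            new NetTcpBinding(SecurityMode.None), new EndpointAddress(callbackAddress));
        IJobNotification channel = factory.CreateChannel();
        channel.JobCompleted(jobId, succeeded);
        ((IClientChannel)channel).Close();
        factory.Close();
    }
}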
This approach is fine, but it does not provide much value if you need to show something in a browser UI: a server cannot notify the browser (unless you implement WebSockets), so for a browser client I would suggest a pull solution. If you're using a rich client, you can host a web service on the client machine and let the worker role notify the client by invoking that service.
Best Regards,
Ming Xu.
The wording of the question doesn't necessarily do the issue justice...
I've got a client UI sitting on a local box and a background Windows service to support it while it performs background functions.
The client UI is just the presentation layer and the Windows service does all the hard-hitting work... so there needs to be communication between the two of them. After spending a while on Google and reading best practices, I decided to build the service layer using WCF and named pipes.
The client UI is the WCF client and the windows service acts as the WCF host (hosting locally only) to support the client.
So this works fine, as it should. The client UI can pass data to the WCF host. But my question is, how do I make that data useful? I've got a couple engines running on the windows service/WCF host but the WCF host is completely unaware of the existence of any background engines. I need the client's communications requests to be able to interact with those engines.
Does anybody have any idea of a good design pattern or methodology on how to approach facilitating communication between a WCF host and running threads?
I think that your best bet is to have some static properties or methods that can be used to interchange data between the service threads/processes and the WCF service.
Alternatively, the way that we approach this is through the use of a database where the client or WCF service queues up requests for the service to respond to and the service, when it is available, updates the database with the responses to those requests. The client then polls the database (through WCF) on a regular basis to retrieve the results of any outstanding requests.
For example, if the client needs a report generated, we fire off a request through WCF and WCF creates a report generation request in the database.
The service responsible for generating reports regularly polls this table and, when it finds a new entry, it spins off a new thread/process that generates the report.
When the report has completed (either successfully or in failure), the service updates the database table with the result.
Meanwhile, the client asks the WCF service on a regular basis if any of the submitted reports have completed yet. The WCF service in turn polls the table for any requests that have been completed, but not been delivered to the client yet, gathers the information from them, and returns them to the client.
This mechanism allows us to do a couple of things:
1) We can scale the number of services processing these requests across multiple physical/virtual machines as the workload increases.
2) A given service can support numerous clients.
3) Through the WCF interface, we can extend this support to any client platform that we choose to support (web, win, tablet, phone, etc).
Forgot to mention:
Just because we elect to use a database doesn't mean that you have to in order to implement this pattern. You can easily implement the same functionality by creating a static request collection that the WCF service and worker service access in much the same way that we use the database.
You will just need to be very careful about properly obtaining and releasing locks on the static properties to avoid cross-thread collisions or deadlocks.
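For example, a rough sketch of that static collection (the type and member names are made up; a ConcurrentDictionary takes care of the locking for simple add/update/remove operations):

using System;
using System.Collections.Concurrent;

public static class RequestStore
{
    // requests waiting to be picked up by the worker service, keyed by request id
    public static readonly ConcurrentDictionary<Guid, string> Pending =
        new ConcurrentDictionary<Guid, string>();

    // completed results waiting to be returned to the client
    public static readonly ConcurrentDictionary<Guid, string> Results =
        new ConcurrentDictionary<Guid, string>();
}

public static class Example
{
    public static Guid SubmitFromWcf(string request)
    {
        // WCF service side: queue the request and hand the id back to the client
        var id = Guid.NewGuid();
        RequestStore.Pending[id] = request;
        return id;
    }

    public static void CompleteFromWorker(Guid id, string result)
    {
        // worker service side: publish the result and drop the pending entry
        RequestStore.Results[id] = result;
        string ignored;
        RequestStore.Pending.TryRemove(id, out ignored);
    }
}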
Edit (again): Let me simplify my problem. I have a Windows Service that exposes some WCF endpoints with methods like:
int ExecuteQuery(string query) {
    // asynchronously execute query that may take 1 second to 20 minutes
    return queryId;
}

string GetStatus(int queryId) {
    // return the status of the query (# of results so far, etc)
}
What is the best way to implement the ExecuteQuery method? Should I just call ThreadPool.QueueUserWorkItem to get my query going?
Note that the actual work behind executing a query is done by a load-balanced black box. I want to be able to have several queries going at the same time.
The analogy is a web browser that is downloading multiple files simultaneously and you have a download manager that can track the status of each file.
Take a look at Microsoft Message Queuing (MSMQ):
Microsoft Message Queuing (MSMQ) technology enables applications running at different times to communicate across heterogeneous networks and systems that may be temporarily offline. MSMQ provides guaranteed message delivery, efficient routing, security, and priority-based messaging. It can be used to implement solutions for both asynchronous and synchronous messaging scenarios.
It's good to know that Windows Communication Foundation (WCF) can leverage queuing services offered by MSMQ.
Either this is a trick question or a no-brainer... ThreadPool.QueueUserWorkItem is about the easiest way to go when you want to execute a piece of code concurrently. I'm sure you already knew that, so technically you have already answered your own question.
So if this is not a trick question, then are you asking exactly how to pass the query in the ThreadPool.QueueUserWorkItem?
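If so, a closure captures the query just fine; a rough sketch against the contract above (the status dictionary and id generation here are made up):

using System.Collections.Concurrent;
using System.Threading;

public class QueryService
{
    // made-up status tracking; replace with however your black box reports progress
    private static readonly ConcurrentDictionary<int, string> Statuses =
        new ConcurrentDictionary<int, string>();
    private static int _nextId;

    public int ExecuteQuery(string query)
    {
        int queryId = Interlocked.Increment(ref _nextId);
        Statuses[queryId] = "Pending";

        // the lambda captures query and queryId, so no separate state object is needed
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Statuses[queryId] = "Running";
            // ... hand the query to the load-balanced black box and wait for it ...
            Statuses[queryId] = "Completed";
        });

        return queryId;
    }

    public string GetStatus(int queryId)
    {
        string status;
        return Statuses.TryGetValue(queryId, out status) ? status : "Unknown";
    }
}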
I use a Windows service for a very similar task and it works very well. I use database tables to queue requests and responses, as it gives me a persistent queue that can be accessed over the network from remote ASP.Net applications, and concurrency control through transactions.
A supervisor thread on a timer spawns workers whenever incoming requests need servicing. I use separate database tables for configuration and control so that I can administer the service and pause the supervisor from an application while leaving the service core running. Logging to a separate table is a convenient way to see what's happening from web apps and a local admin app.
I wouldn't use the ThreadPool for long-running threads, but instead create a worker class that runs in its own thread and uses callback methods to update the supervisor with progress and completion status.
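A bare-bones sketch of such a worker class (the callback signatures are only examples):

using System;
using System.Threading;

public class Worker
{
    private readonly Action<int> _onProgress;
    private readonly Action<bool> _onCompleted;

    public Worker(Action<int> onProgress, Action<bool> onCompleted)
    {
        _onProgress = onProgress;
        _onCompleted = onCompleted;
    }

    public void Start()
    {
        var thread = new Thread(Run) { IsBackground = true };
        thread.Start();
    }

    private void Run()
    {
        try
        {
            for (int pct = 0; pct <= 100; pct += 25)
            {
                // ... do a slice of the long-running work ...
                _onProgress(pct);   // let the supervisor update its status tables
            }
            _onCompleted(true);
        }
        catch
        {
            _onCompleted(false);
        }
    }
}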
Adding to the MSMQ answer, you could think about looking at using an Enterprise Service Bus (ESB) to handle these sorts of things, if future scalability is a concern. Check out NServiceBus for one .NET example.
I would use WWF (4.0):
You can start long-running workflows that can be handled across several machines, execute tasks in parallel, get failure support and friendly coding, manage it with AppFabric, and it is free...