I would like to pass an object from a C# application to another application to be processed. How can I accomplish this?
Create a socket and pass the serialized data over TCP using the System.Net.Sockets classes.
You might want to implement a queue of some sort. Reasons:
This ensures that your objects can queue up while processing occurs.
If the processor is down, you don't lose any submitted objects as they will wait in the queue.
Decouples your C# application and your processing service (or whatever it may be)
The type of queue you need depends on your environment. Here are two good options:
MSMQ and how to interface with it in C# for local or network-based applications (a short sketch follows this list)
Windows Azure Queue or Service Bus if you're working in the Azure cloud
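To illustrate the MSMQ option, here is a minimal sketch of submitting an object to a local private queue and reading it back. The queue path and the WorkItem type are placeholders, not anything from your application:

    // Requires a reference to System.Messaging.
    using System.Messaging;

    public class WorkItem
    {
        public int Id { get; set; }
        public string Payload { get; set; }
    }

    public static class WorkQueue
    {
        private const string Path = @".\Private$\workitems";

        public static void Submit(WorkItem item)
        {
            if (!MessageQueue.Exists(Path))
                MessageQueue.Create(Path);

            using (var queue = new MessageQueue(Path))
            {
                // XmlMessageFormatter serializes the body; the label is just for display.
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(WorkItem) });
                queue.Send(item, "work-item-" + item.Id);
            }
        }

        public static WorkItem Take()
        {
            using (var queue = new MessageQueue(Path))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(WorkItem) });
                var message = queue.Receive();   // blocks until a message arrives
                return (WorkItem)message.Body;
            }
        }
    }

Anything submitted but not yet received survives your application restarting, because the message lives in the MSMQ service rather than in your process.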
Related
I have a Windows service written in C# that reads from MSMQ and, based on the type of the message, assigns it to an Agent that processes that message in a worker thread. The application starts with no agents; they are created dynamically at runtime as messages arrive in MSMQ.
If the agent worker thread is busy doing work, the message is queued to its local queue. So far so good. But if for some reason the service is stopped, the local queue content is lost.
I am trying to figure out the best way to handle this scenario. Right now the local queues are System.Collections.Concurrent.ConcurrentQueue instances. I could probably use a SQL CE database or some other persistent storage, but I am worried about performance. The other thing on my mind is to read from MSMQ only when agents are ready to process a message, but the problem is that I don't know what messages the MSMQ will contain.
What possible approaches can I take on this issue?
Your design basically implements the following pattern: http://www.eaipatterns.com/MessageDispatcher.html
However, rather than using actual messaging you are choosing to implement the dispatcher in multithreaded code.
Rather, each processing agent should be an autonomous process with its own physical message queue. This is what will provide message durability in case of failure. It also allows you to scale simply by hosting more instances of the processing agent.
I have built a similar system that depends on Redis. The idea is that it provides fast, in-memory data access isolated from the rest of the application, and it will not shut down when my service does. Furthermore, it will eventually persist my data to disk, so I get a good compromise between reliability and speed.
If you designed it so that each client read from its own message queue hosted in Redis, the queues would be independent of the service's downtime, and each worker's load would still be there to be apportioned when you next start the service.
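A rough sketch of that per-agent Redis queue, using the StackExchange.Redis client (the choice of client, the key naming and the string serialization are assumptions for illustration):

    using StackExchange.Redis;

    public class RedisAgentQueue
    {
        private readonly IDatabase _db;
        private readonly string _key;

        public RedisAgentQueue(ConnectionMultiplexer redis, string agentName)
        {
            _db = redis.GetDatabase();
            _key = "queue:" + agentName;   // one Redis list per agent
        }

        public void Enqueue(string serializedMessage)
        {
            // LPUSH: the message now lives in Redis, so it survives a service restart.
            _db.ListLeftPush(_key, serializedMessage);
        }

        public string TryDequeue()
        {
            // RPOP returns a null RedisValue when the list is empty.
            RedisValue value = _db.ListRightPop(_key);
            return value.IsNull ? null : (string)value;
        }
    }

The worker just calls TryDequeue in its processing loop; whatever is still in the list when the service stops is waiting there on the next start.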
Why don't you simply create two new MSMQ queues to receive the messages for AgentA and AgentB, and create a new agent that (transactionally) fetches the command from the main queue and dispatches the message to the proper agent queue?
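A minimal sketch of what that transactional dispatcher could look like; the queue instances and the label-based routing rule are illustrative, and all of the queues involved would have to be created as transactional (MessageQueue.Create(path, true)):

    using System.Messaging;

    public static class MainQueueDispatcher
    {
        public static void DispatchOne(MessageQueue mainQueue,
                                       MessageQueue agentAQueue,
                                       MessageQueue agentBQueue)
        {
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();

                Message message = mainQueue.Receive(tx);

                // Route on the label (or peek at the body) - whatever identifies the agent.
                MessageQueue target = message.Label.StartsWith("A") ? agentAQueue : agentBQueue;
                target.Send(message, tx);

                // Receive + send commit atomically, so nothing is lost
                // if the service dies in between.
                tx.Commit();
            }
        }
    }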
I have a basic Windows service which does some conversion of data. There's a decoupled GUI which allows the user to change some configuration, and this needs to be propagated to the running Windows service. Both of them run on the same box and are implemented using C# .NET. What is the best way to communicate with the service, other than interprocess communication mechanisms like mutexes, events, etc.?
Also, I'd like to avoid implementing it as a web service, because it isn't one.
I would use a WCF Service to communicate.
You can use the netNamedPipe binding, but that might not work on Windows 2008/Windows 7, since the service runs in session 0 and all user code runs in sessions > 0, so they would not be able to communicate.
So I used netTcpBinding in my own project.
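A minimal sketch of that shape, with the Windows service hosting the contract over netTcpBinding and the GUI calling it (the contract, port and setting names are invented for illustration):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IConfigService
    {
        [OperationContract]
        void UpdateSetting(string name, string value);
    }

    public class ConfigService : IConfigService
    {
        public void UpdateSetting(string name, string value)
        {
            // Apply the new setting inside the Windows service here.
        }
    }

    public static class ConfigServiceHost
    {
        private const string Address = "net.tcp://localhost:8523/config";

        // Called from the Windows service's OnStart.
        public static ServiceHost Start()
        {
            var host = new ServiceHost(typeof(ConfigService), new Uri(Address));
            host.AddServiceEndpoint(typeof(IConfigService), new NetTcpBinding(), "");
            host.Open();
            return host;
        }

        // Called from the GUI whenever the user changes a setting.
        public static void PushSetting(string name, string value)
        {
            var factory = new ChannelFactory<IConfigService>(
                new NetTcpBinding(), new EndpointAddress(Address));
            IConfigService proxy = factory.CreateChannel();
            proxy.UpdateSetting(name, value);
            ((IClientChannel)proxy).Close();
        }
    }

A real setup would add security settings and fault handling around the channel, but the shape is the same.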
If the processes are not going to move to different machines, you can use memory mapped files as the communication mechanism.
If that's not the case, WCF is a good option.
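If the processes will stay on the same machine, a memory-mapped file exchange could look roughly like this; the map name, capacity and length-prefixed layout are assumptions, and a real version would add a Mutex around reads and writes:

    using System.IO.MemoryMappedFiles;
    using System.Text;

    public static class SharedConfig
    {
        // Both processes open the same named mapping; when one side runs as a
        // Windows service, the name may need a "Global\" prefix and suitable permissions.
        private const string MapName = "MyApp.SharedConfig";
        private const int Capacity = 4096;

        private static readonly MemoryMappedFile Map =
            MemoryMappedFile.CreateOrOpen(MapName, Capacity);

        // GUI side: write a small, length-prefixed UTF-8 payload.
        public static void Write(string settingsText)
        {
            using (var accessor = Map.CreateViewAccessor())
            {
                byte[] bytes = Encoding.UTF8.GetBytes(settingsText);
                accessor.Write(0, bytes.Length);
                accessor.WriteArray(4, bytes, 0, bytes.Length);
            }
        }

        // Service side: read the current payload back out.
        public static string Read()
        {
            using (var accessor = Map.CreateViewAccessor())
            {
                int length = accessor.ReadInt32(0);
                var bytes = new byte[length];
                accessor.ReadArray(4, bytes, 0, length);
                return Encoding.UTF8.GetString(bytes);
            }
        }
    }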
Since you're dealing with configuration data for the service, I would persist it somewhere. Database, file, registry, etc. UI writes the information and the service reads it when appropriate (e.g. each run).
The wording of the question doesn't necessarily do the issue justice...
I've got a client UI sitting on a local box and a background Windows service to support it while it performs background functions.
The client UI is just the presentation layer and the windows service does all the hard hitting action... so there needs to be communication between the two of them. After spending a while on google and reading best practices, I decided to make the service layer using WCF and named pipes.
The client UI is the WCF client and the windows service acts as the WCF host (hosting locally only) to support the client.
So this works fine, as it should. The client UI can pass data to the WCF host. But my question is, how do I make that data useful? I've got a couple of engines running on the Windows service/WCF host, but the WCF host is completely unaware of the existence of any background engines. I need the client's communication requests to be able to interact with those engines.
Does anybody have any idea of a good design pattern or methodology on how to approach facilitating communication between a WCF host and running threads?
I think that your best bet is to have some static properties or methods that can be used to exchange data between the service threads/processes and the WCF service.
Alternatively, the way that we approach this is through the use of a database: the client or WCF service queues up requests for the service to respond to, and the service, when it is available, updates the database with the responses to those requests. The client then polls the database (through WCF) on a regular basis to retrieve the results of any outstanding requests.
For example, if the client needs a report generated, we fire off a request through WCF and WCF creates a report generation request in the database.
The service responsible for generating reports regularly polls this table and, when it finds a new entry, it spins off a new thread/process that generates the report.
When the report has completed (either successfully or in failure), the service updates the database table with the result.
Meanwhile, the client asks the WCF service on a regular basis if any of the submitted reports have completed yet. The WCF service in turn polls the table for any requests that have been completed, but not been delivered to the client yet, gathers the information from them, and returns them to the client.
This mechanism allows us to do a couple of things:
1) We can scale the number of services processing these requests across multiple physical/virtual machines as the workload increases.
2) A given service can support numerous clients.
3) Through the WCF interface, we can extend this support to any client platform that we choose to support (web, win, tablet, phone, etc).
Forgot to mention:
Just because we elect to use a database doesn't mean that you have to in order to implement this pattern. You can easily implement the same functionality by creating a static request collection that the WCF service and worker service access in much the same way that we use the database.
You will just need to be very careful about properly obtaining and releasing locks on the static properties to avoid cross-thread collisions or deadlocks.
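To make the static-collection variant concrete, here is a small sketch where the WCF operations and the worker thread share a request queue and a result map in the same process (ReportRequest and the string result are placeholders):

    using System;
    using System.Collections.Concurrent;

    public class ReportRequest
    {
        public readonly Guid Id = Guid.NewGuid();
        public string Parameters;
    }

    public static class RequestBroker
    {
        // Pending work: written by the WCF operations, consumed by the worker thread.
        private static readonly BlockingCollection<ReportRequest> Pending =
            new BlockingCollection<ReportRequest>();

        // Completed results, keyed by request id, polled by the client through WCF.
        private static readonly ConcurrentDictionary<Guid, string> Results =
            new ConcurrentDictionary<Guid, string>();

        // Called from the WCF service: queue the request and hand the id back to the client.
        public static Guid Submit(string parameters)
        {
            var request = new ReportRequest { Parameters = parameters };
            Pending.Add(request);
            return request.Id;
        }

        // Called from the WCF service when the client polls for a result.
        public static string TryGetResult(Guid id)
        {
            string report;
            return Results.TryGetValue(id, out report) ? report : null;
        }

        // Worker loop: GetConsumingEnumerable blocks until a request arrives.
        public static void WorkerLoop()
        {
            foreach (var request in Pending.GetConsumingEnumerable())
            {
                Results[request.Id] = "generated report for " + request.Parameters;
            }
        }
    }

Using BlockingCollection and ConcurrentDictionary hands the locking off to the framework, which avoids most of the cross-thread collision and deadlock risks mentioned above.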
I am fixing a .net app written on top of nServiceBus and have a few questions:
The app is configured AsA_Publisher, and when it starts it waits for incoming connections on a socket. Do you know why it might have been implemented like this?
Why sockets? This socket is created during the Run method of a class which implements the IWantToRunAtStartup interface.
Once a message arrives, it is written to a queue (Q1). The message is then read from the queue (Q1), its format is changed, and it is inserted into yet another queue (Q2). The message is then read from the queue (Q2) and sent to another application by calling a web service. The whole idea is to change the message format and send it off to the final destination. If nServiceBus is built on top of MSMQ, then why is the application creating more queues and managing them?
I see absolutely nothing about Publish or Subscribe anywhere in the project. I guess it is relying on the socket to receive messages and if so then it is not really taking advantage of nServiceBus's queuing facility? Or am I lost...
If queues are needed and I were to build this, I would have one app writing to the queue (Q1), another app reading from the queue (Q1), changing the format and inserting into another queue (Q2), and finally a third app reading from the queue (Q2) and sending it off to the web service. What do you think?
Thanks,
I see nothing wrong with opening a socket in Run in an IWantToRunAtStartup. It must somehow be required that the service can be reached through some custom protocol implemented on top of sockets.
Processing the incoming socket messages by immediately bus.Sending a message is also the way to go - the greatest degree of reliability is achieved by immediately doing the safest thing possible: sending a durable message.
Performing the message translation in a handler and bus.Sending the result in another message is ALSO the way to go - especially if the translation is somehow expensive and it makes sense to be able to pick up processing at this point if e.g. the web service call fails.
Making a web service call in a message handler is also the way to go - especially if the web service call is idempotent, so it doesn't break anything if the message ever gets retried.
In other words, it sounds like the service correctly bridges a socket-based interface to a web service-based interface.
It sounds weird, however, that the service employs multiple queues to achieve this. With NServiceBus, one single queue would be entirely sufficient: the service's input queue.
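Sketched against the older NServiceBus API the question describes (IWantToRunAtStartup, IHandleMessages), the single-queue shape could look roughly like this; the message type, port, translation logic and the web service call are placeholders:

    using System.Net;
    using System.Net.Sockets;
    using NServiceBus;

    public class IncomingPayload : IMessage
    {
        public string RawData { get; set; }
    }

    public class SocketListener : IWantToRunAtStartup
    {
        public IBus Bus { get; set; }   // property-injected by NServiceBus

        private TcpListener _listener;

        public void Run()
        {
            _listener = new TcpListener(IPAddress.Loopback, 9100);   // example port
            _listener.Start();
            // For every payload read off an accepted connection, hand it straight to
            // the bus so it is durably stored in this service's own input queue:
            // Bus.SendLocal(new IncomingPayload { RawData = payload });
        }

        public void Stop()
        {
            if (_listener != null) _listener.Stop();
        }
    }

    public class IncomingPayloadHandler : IHandleMessages<IncomingPayload>
    {
        public void Handle(IncomingPayload message)
        {
            // Translate the format, then call the external web service with the result.
            // If either step throws, NServiceBus retries the message instead of losing it.
            string translated = Translate(message.RawData);
            // externalServiceProxy.Submit(translated);   // placeholder for the real call
        }

        private static string Translate(string raw) { return raw.ToUpperInvariant(); }
    }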
I am creating a WCF service (CALLER) for Azure. The service (CALLER) calls async methods of another third-party service (EXTN). The third-party service calls the callback methods of another WCF service (LISTNER) hosted by me on Azure. CALLER enters the service details in the database with status = PENDING.
In the callback service (LISTNER) I am updating the status of the request as COMPLETED/FAILED in the database.
But I want the CALLER to be notified when the status is updated in the SQL Azure db.
I am thinking of creating a worker thread which will poll the database periodically to check the status update and notify the CALLER about this.
Is there any other better / efficient alternative to this approach?
The features you're looking for are implemented in the AppFabric service bus.
Not really. There is another way (not sure it works on Azure) by using the integrated SQL message queuing (queue on updates via a trigger), and your thread could continuously poll it (there is a way to have the read WAIT for an entry in the queue, so you issue one and it waits), but besides that...
...no, not from the database level.
I have a similar application and I handle it with a notification trigger OUTSIDE the database (i.e. notifications are sent from the business logic when values change).
Another option is to use Queues and have the caller poll for notification messages from the listener. The Service Bus can be used, by having the Caller subscribe to event notifications sent from the Listener. In your scenario though it doesn't provide much more than the Queues do - if you are behind the firewall, the Service Bus uses polling as well.
Queues are probably the most efficient way to send notifications - that's why they were created in the first place. The Service Bus is used to create semi-permanent connections between different services by providing a lot more features than simple message passing. That makes it a bit less flexible and requires a bit more programming. Its billing model (charge per SB connection) reflects this too. You are not expected to use a lot of SB connections.
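A rough sketch of the queue-based notification idea with a Windows Azure storage queue; the queue name, the requestId:status message format and the storage SDK version are all assumptions:

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    public static class StatusNotifications
    {
        private const string QueueName = "status-updates";

        private static CloudQueue GetQueue(string connectionString)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            var queue = account.CreateCloudQueueClient().GetQueueReference(QueueName);
            queue.CreateIfNotExists();
            return queue;
        }

        // LISTNER side: push a notification when it marks a request COMPLETED/FAILED.
        public static void Publish(string connectionString, string requestId, string status)
        {
            GetQueue(connectionString).AddMessage(new CloudQueueMessage(requestId + ":" + status));
        }

        // CALLER side: poll the queue instead of repeatedly querying the SQL Azure table.
        public static string TryReceive(string connectionString)
        {
            var queue = GetQueue(connectionString);
            CloudQueueMessage message = queue.GetMessage();
            if (message == null) return null;

            queue.DeleteMessage(message);
            return message.AsString;
        }
    }

The LISTNER pushes a message whenever it updates the row, and the CALLER's worker thread polls the queue for those notifications.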