I currently have a service running several subprocesses (with System.Diagnostics.Process). Each subprocess can run for hours and is in one of a set of predefined states (think "starting", "working", "cleaning up", etc. - completely predefined, no custom data attached to a state has to be reported). The subprocesses cannot each be individual Windows services (there are more possible states than Windows service states). I need to somehow report this state to the parent service. All processes are running on the same Windows machine.
I need to be able to both query subprocess states from other processes (not the ones started by the service) and update the parent service about each subprocess's state from within those subprocesses. All processes share a configuration file in which each subprocess gets assigned a unique ID to identify itself with, so other processes can read the states easily without having to manage the processes themselves. I've thought about doing it like so:
Redirect each subprocess's standard output to the service (RedirectStandardOutput = true), read each line of the output, and catch "special" lines (STATECHANGE:state) - a rough sketch of this follows below
Write out all subprocesses' states to a file in a predefined location whenever a state changes, and delete that file on service exit.
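For the first idea, this is roughly what I picture on the parent side (just a sketch - "child.exe" and the exact line prefix are placeholders, not anything fixed):

    // Sketch of idea 1: parse a "STATECHANGE:" prefix out of the child's stdout.
    using System;
    using System.Diagnostics;

    class StateWatcher
    {
        static void Main()
        {
            var psi = new ProcessStartInfo("child.exe")   // hypothetical child executable
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };

            using (var child = Process.Start(psi))
            {
                child.OutputDataReceived += (s, e) =>
                {
                    if (e.Data != null && e.Data.StartsWith("STATECHANGE:"))
                    {
                        string state = e.Data.Substring("STATECHANGE:".Length);
                        Console.WriteLine($"Child {child.Id} is now '{state}'");
                    }
                };
                child.BeginOutputReadLine();   // raises OutputDataReceived once per line
                child.WaitForExit();
            }
        }
    }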
It looks like I'm trying to find a solution to a problem which was solved ages ago and I haven't found that solution. Is there any "nice" way to do such state reporting?
In general, you're delving into the realm of interprocess communications, or IPC.
Though you haven't tagged this question as being specific to Microsoft Windows, it is tagged as C# and .NET, so it's probable that you are running in a Windows environment. My answer assumes you're running this system on MS Windows.
A common solution to a problem such as this is to store state in a database. Each service/process could write to the database independently, and the database could then be queried by any process interested in that information. But this isn't real two-way communication.
Regarding how the parent could communicate with the child processes, this could be done a number of ways, but it would probably be easiest if the child process ran some kind of message pump on a thread and performed data processing on another thread. The message pump would receive and respond to messages, while the data processing thread would do its thing.
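As a very rough sketch of that split (the BlockingCollection inbox and the "QUERY_STATE" message name are just placeholders; whatever IPC mechanism you pick from the list below is what would actually feed the inbox):

    // One thread pumps messages, another does the long-running work.
    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;

    class ChildProcessCore
    {
        static readonly BlockingCollection<string> Inbox = new BlockingCollection<string>();

        static void Main()
        {
            // Data-processing thread: does its thing.
            Task.Run(() => DoWork());

            // Message pump: blocks on the inbox and reacts to control messages.
            foreach (string message in Inbox.GetConsumingEnumerable())
            {
                if (message == "QUERY_STATE")
                    Console.WriteLine("reporting current state...");   // reply via your chosen IPC channel
            }
        }

        // Called by whatever IPC listener you wire up (pipe, queue, socket, ...).
        public static void Post(string message) => Inbox.Add(message);

        static void DoWork()
        {
            while (true) { Thread.Sleep(1000); /* long-running processing */ }
        }
    }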
Using this scheme, messages could be exchanged in a number of different ways, including:
Windows Communication Foundation (WCF)
Named Pipes
.NET Remoting
MS Message Queue (MSMQ)
Windows Clipboard
Dynamic Data Exchange (DDE)
Component Object Model (COM)
Memory-mapped Files
Remote Procedure Calls (RPC)
Sockets
Since all of these processes are running on the same machine, pipes are a simple, straightforward choice. Check out the System.IO.Pipes namespace.
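Here's a minimal sketch of what that might look like (the pipe name, message format and timeout are arbitrary choices, not anything prescribed):

    // Parent reads state reports; children connect and write them.
    using System;
    using System.IO;
    using System.IO.Pipes;

    class PipeDemo
    {
        // Parent/service side: waits for one child and reads one line.
        public static void RunServer()
        {
            using (var server = new NamedPipeServerStream("state_pipe", PipeDirection.In))
            {
                server.WaitForConnection();
                using (var reader = new StreamReader(server))
                {
                    string line = reader.ReadLine();   // e.g. "worker42:working"
                    Console.WriteLine("Received: " + line);
                }
            }
        }

        // Child side: connects and reports its state.
        public static void RunClient(string id, string state)
        {
            using (var client = new NamedPipeClientStream(".", "state_pipe", PipeDirection.Out))
            {
                client.Connect(5000);
                using (var writer = new StreamWriter(client))
                {
                    writer.WriteLine($"{id}:{state}");
                    writer.Flush();
                }
            }
        }
    }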
WCF allows you to build a rich messaging interface that can be implemented on top of pipes, as well as on top of other IPC mechanisms.
There are lots of good resources on the Internet that discuss interprocess communication in .NET, and rather than rehash those here, you should search them out using terms such as ".NET", "interprocess communications", "IPC" and "local machine" (since you need IPC between processes on the local machine).
Related
I have a problem and not much experience in C#, so I did a lot of research and I'm stuck.
I have to make two C# applications. The first is a Windows Forms application; the second runs in the background. The first application will be a point of sale (POS) that needs to communicate with the background application to get information (products, customers, etc.) and to send data. I don't want to use a web service because of problems like timeouts, so can anyone help me with some idea of how to perform this task?
It is important to mention that there will be just one background application, while there will be many POS applications (n number of apps) communicating with it.
There is a myriad of ways of doing interprocess communication. As the question is so generic, I will point out some of the more common ways.
The background process can be a Windows service which updates the DB, and the POS systems query the DB to retrieve what they need. Even if the background process reads from the same DB, you can have a separate table which has "finished" information ready for the POS piece to pick up. You could use a file instead of a DB to store these finished results too, but most folks prefer a DB.
You can use a WCF channel to establish communication between the POS piece and the background process.
You can convert your background process to a web service and let your POS piece communicate with it using XML. I don't think any time-out issue should be a problem. You will have to explain better what time-out issue is causing you to rule out this option.
You can convert the whole piece into a web site, and the POS will then simply be a browser.
You can use a bus like Tibco or MQ to pass data.
Or you can go the old fashioned way of TCP sockets.
The most preferred way is usually the web service or web site, depending on your constraints.
Typically you'll use a message queue for something like this. They are a component in ensuring clean separation of concerns and reducing cross-application coupling, and are meant to receive messages from some publisher (thus freeing the publisher of any further responsibility) and push messages to some subscriber.
RabbitMQ is a popular framework: https://www.rabbitmq.com/
(Note that RabbitMQ (and other ready-built frameworks) can sometimes be daunting for new application programmers, as they handle a great many use cases. However, the underlying concept of writing to a queue from one application and reading from the queue in the other application is really the key here... feel free to implement a small utility of your own as a learning experience, but I do recommend a pre-existing framework if you're comfortable using one.)
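To give a flavour of the concept, here's a rough sketch using the RabbitMQ.Client NuGet package (this assumes the 6.x synchronous API and a broker running on localhost; the "pos_data" queue name is arbitrary):

    // Publish from one app, consume in the other.
    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class QueueDemo
    {
        public static void Publish(string message)
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                channel.QueueDeclare("pos_data", durable: false, exclusive: false, autoDelete: false);
                channel.BasicPublish(exchange: "", routingKey: "pos_data",
                                     basicProperties: null, body: Encoding.UTF8.GetBytes(message));
            }
        }

        public static void Consume()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            var connection = factory.CreateConnection();
            var channel = connection.CreateModel();
            channel.QueueDeclare("pos_data", durable: false, exclusive: false, autoDelete: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, args) =>
                Console.WriteLine("Got: " + Encoding.UTF8.GetString(args.Body.ToArray()));
            channel.BasicConsume("pos_data", autoAck: true, consumer: consumer);
        }
    }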
One method is to use named pipes for such communications between different programs.
How to: Use Named Pipes for Network Interprocess Communication
If you do not want to use a web service (based on the SOAP protocol), you could try Web API. That way you can build REST-based interfaces with JSON (JSON streaming between computers is faster than XML streaming).
I think the following link can be useful to you:
http://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api/using-web-api-with-aspnet-web-forms
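As a rough idea of the shape of it (this assumes the ASP.NET Web API 2 packages; the Product type and the in-memory list are just placeholders for your real data source):

    // Routes to /api/products by convention; returns JSON by default.
    using System.Collections.Generic;
    using System.Web.Http;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProductsController : ApiController
    {
        static readonly List<Product> Products = new List<Product>
        {
            new Product { Id = 1, Name = "Sample item" }
        };

        public IEnumerable<Product> Get() => Products;

        public void Post(Product product) => Products.Add(product);
    }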
I have many instances of a process I've written on a server. I'd like to associate some information with each process. In this specific case I'd like to store the "CurrentState" of the process - "RUNNING|DRAINING|STOPPING", but it would be useful for me to store a "Friendly Name" and so on.
I want to query this information from another "mother" process - this mother process will query the processes running and collate the data.
I've thought of a couple of different ways I could achieve this. For example, I might open up a NetPipe to each process of interest and ask for the data, or have each process broadcast its state regularly.
I was wondering: is there a way to store key value pair information against a process built into Windows itself? Is there an accepted pattern for doing this?
I control the source for the child processes and the mother process. They are written in C#, P/Invoking is fine. The operating system is Windows 2012 R2.
You can host WCF services that use named pipes:
http://msdn.microsoft.com/en-us/library/ms733769(v=vs.110).aspx
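Roughly, a self-hosted sketch might look like this (the contract, the addresses and the pipe name are just examples, not anything prescribed):

    // WCF service exposed over a named pipe; mother hosts it or calls it as needed.
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IStatusService
    {
        [OperationContract]
        string GetStatus(int processId);
    }

    public class StatusService : IStatusService
    {
        public string GetStatus(int processId) => "RUNNING";   // look up real state here
    }

    class Hosting
    {
        public static void RunServer()
        {
            using (var host = new ServiceHost(typeof(StatusService), new Uri("net.pipe://localhost")))
            {
                host.AddServiceEndpoint(typeof(IStatusService), new NetNamedPipeBinding(), "status");
                host.Open();
                Console.ReadLine();   // keep the host alive
            }
        }

        public static void RunClient()
        {
            var factory = new ChannelFactory<IStatusService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/status"));
            IStatusService proxy = factory.CreateChannel();
            Console.WriteLine(proxy.GetStatus(1234));
        }
    }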
Based on some of your comments, it looks like you could also consider the System.AddIn (aka Managed AddIn Framework, MAF) functionality to create, host, and communicate with add-ins. MAF supports loading add-ins in your app domain, a separate app domain, or a completely separate process. The downside with MAF is that it requires 5 DLLs to get started, but in doing so it gives you a lot of flexibility with API compatibility as you version and change your pipeline.
If you're controlling the data from a mother process, you can also use AppDomains to load your other processes and communicate via marshaled data such as a Status class, or use the AppDomains to set and get data.
Be aware that any Status data you transfer needs to either be a class which derives from MarshalByRefObject or be marked as Serializable. The reason for this is that AppDomains are treated by the OS much like separate processes, so they can't access each other's memory and actually have to serialize data as if it were being passed through IPC.
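A rough sketch of the AppDomain route (.NET Framework only; the Status type and domain name are just placeholders):

    // The Status instance lives in the child domain; the mother talks to it through a proxy.
    using System;

    public class Status : MarshalByRefObject
    {
        public string CurrentState { get; set; } = "STARTING";
    }

    class Mother
    {
        static void Main()
        {
            AppDomain child = AppDomain.CreateDomain("ChildDomain");

            // CreateInstanceAndUnwrap returns a transparent proxy to the remote object.
            var status = (Status)child.CreateInstanceAndUnwrap(
                typeof(Status).Assembly.FullName, typeof(Status).FullName);

            status.CurrentState = "RUNNING";
            Console.WriteLine(status.CurrentState);

            AppDomain.Unload(child);
        }
    }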
Take a look at the .Net Process Class:
http://msdn.microsoft.com/en-us/library/system.diagnostics.process(v=vs.110).aspx
You can use it to get all running processes, start a process, get a process's unique Id, and be alerted when the process exits. This should give you everything you need to track processes.
Children can call Process.GetCurrentProcess to get their own process id, then make a call to the "mother" process to associate arbitrary data about itself.
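A rough sketch of both sides (the "childworker" process name is just a placeholder):

    // Mother enumerates children by name and watches for exits; children report their own id.
    using System;
    using System.Diagnostics;

    class ProcessTracking
    {
        static void Main()
        {
            // A child reports its own id so the mother can correlate data with it.
            Console.WriteLine("My id: " + Process.GetCurrentProcess().Id);

            // The mother finds running children and subscribes to their exit events.
            foreach (Process p in Process.GetProcessesByName("childworker"))   // hypothetical exe name
            {
                p.EnableRaisingEvents = true;
                p.Exited += (s, e) => Console.WriteLine($"Process {p.Id} exited");
                Console.WriteLine($"Tracking {p.ProcessName} ({p.Id})");
            }

            Console.ReadLine();
        }
    }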
I have a VB6 Windows application (old.exe) and a separate C# WinForms application (new.exe). They both run on the same Windows machine.
I have access to both the VB6 and the C# source code, but the apps need to remain separate.
If both are running and have knowledge of each other (process Id), what's the best way to send a message from one window to the other?
Update:
In this case, I'm only talking about very infrequent and small messages - e.g. change the tab you're looking at using a small message like, "Invoice 67"
Bi-directional messaging would be great, but VB6 to .Net is the most important.
Neither of the two prior answers considers the fact that this may be a multi-tenant environment or may even span your domain. As you move into distributed systems, you should consider messaging as opposed to inter-process communication, which over time will limit scalability.
For on-premises solutions, consider MSMQ; there is a multitude of documentation out there demonstrating the simplicity of this messaging infrastructure.
For broader scenarios, you should consider Windows Azure Storage Queues: you get almost identical usability, but with broader accessibility and improved management tools.
MSMQ is domain-specific, but Azure spans the globe.
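A minimal MSMQ sketch (this assumes the MSMQ Windows feature is installed and you reference System.Messaging; the private queue path and the label are arbitrary):

    // One app sends small messages, the other receives them.
    using System;
    using System.Messaging;

    class MsmqDemo
    {
        const string Path = @".\Private$\app_messages";

        public static void Send(string body)
        {
            if (!MessageQueue.Exists(Path))
                MessageQueue.Create(Path);

            using (var queue = new MessageQueue(Path))
                queue.Send(body, "tab-change");           // body + label
        }

        public static void Receive()
        {
            using (var queue = new MessageQueue(Path))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                Message message = queue.Receive();        // blocks until a message arrives
                Console.WriteLine((string)message.Body);
            }
        }
    }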
Agreed with Clay's comments.
However, I'll take a stab in the dark and go with the most obvious answer:
.NET (with WCF) supports both IPC and named pipes for local inter-process communication.
Here's a link on the topic using named pipes... but it's super old, and doesn't use WCF like it should... but the point is the same: http://www.switchonthecode.com/tutorials/interprocess-communication-using-named-pipes-in-csharp Updated version using WCF: http://www.switchonthecode.com/tutorials/wcf-tutorial-basic-interprocess-communication
Here is a more-or-less complete list of IPC alternatives for Windows.
http://msdn.microsoft.com/en-us/library/aa365574%28v=vs.85%29.aspx
Most of them can be utilized from VB6 and C# as well.
The solution that I have used for this very purpose is TCP communication between the processes. It allows for bidirectional communication, and as a bonus, should you ever move one of the applications to a different box, your apps will continue functioning with very few changes.
In .NET, you can use a plethora of classes for this purpose (a ton of stuff from low-level to high-level in System.Net). In VB6, you could go with the Winsock control that ships with the IDE. I use the Dart Winsock control (costs $$$), just because it is so much more flexible.
I set up both apps to send/receive XML fragments with a known schema. There is typically an attribute that tells the other app the type of message being received, along with the payload.
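A rough sketch of the .NET side of that (the port and the XML message format are just examples; the VB6 end would do the equivalent with its Winsock control):

    // C# app listens for small XML fragments over a loopback TCP connection.
    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    class TcpXmlDemo
    {
        public static void Listen()
        {
            var listener = new TcpListener(IPAddress.Loopback, 9050);
            listener.Start();
            using (TcpClient client = listener.AcceptTcpClient())
            using (var reader = new StreamReader(client.GetStream()))
            {
                string xml = reader.ReadLine();   // e.g. <msg type="ShowInvoice">67</msg>
                Console.WriteLine("Received: " + xml);
            }
            listener.Stop();
        }

        // Sending from .NET looks similar in the other direction.
        public static void Send(string xml)
        {
            using (var client = new TcpClient("127.0.0.1", 9050))
            using (var writer = new StreamWriter(client.GetStream()))
            {
                writer.WriteLine(xml);
                writer.Flush();
            }
        }
    }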
A basic solution (based on the info provided):
Create a dedicated folder for incoming and outgoing messages (one application's incoming folder will be the other's outgoing folder)
Write messages (or data) in text/XML or another format to the output folder (adding a Source field so the application knows where it's from)
Read the messages, based on date, and import messages/data
This allows integration to/from any application.
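A rough sketch of the two halves in C# (folder paths, file naming and format are all up to you):

    // Writer drops a message file; reader watches its own incoming folder and imports new files.
    using System;
    using System.IO;

    class FolderExchange
    {
        public static void Send(string incomingFolder, string source, string body)
        {
            string file = Path.Combine(incomingFolder,
                $"{DateTime.UtcNow:yyyyMMddHHmmssfff}_{source}.txt");
            File.WriteAllText(file, body);
        }

        public static void Watch(string incomingFolder)
        {
            var watcher = new FileSystemWatcher(incomingFolder, "*.txt");
            watcher.Created += (s, e) =>
            {
                Console.WriteLine("Importing " + e.FullPath);
                // read, process, then archive or delete the file
            };
            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }
    }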
The Ultimate Answer To This Question
VIRTUAL NULL MODEM:
http://en.wikipedia.org/wiki/Null_modem#Virtual_null_modem
From Wikipedia:
A virtual null modem is a communication method to connect two computer applications directly using a virtual serial port. Unlike a null modem cable, a virtual null modem is a software solution which emulates a hardware null modem within the computer. All features of a hardware null modem are available in a virtual null modem as well. There are some advantages to this:
Higher transmission speed of serial data (limited by computer performance only).
Virtual connection over network or Internet is possible, mitigating cable length restrictions.
An unlimited number of virtual connections is possible.
No serial cable is needed.
The computer's physical serial ports remain free.
For instance, DOSBox has allowed older DOS games to use virtual null modems. Another common example consists of unix pseudo terminals (pty) which present a standard tty interface to user applications, including virtual serial controls. Two such ptys may easily be linked together by an application to form a virtual null modem communication path.
With the HIGHLIGHT of this solution being: IT REQUIRES NO CABLES!!!
*Note: this is an attempt at humor. Forgive me if it's not funny.
So we have this somewhat unusual need in our product. We have numerous processes running on the local host and need to construct a means of communication between them. The difficulty is that ...
There is no 'server' or master process
Messages will be broadcast to all listening nodes
Nodes are all Windows processes, but may be C++ or C#
Nodes will be running in both 32-bit and 64-bit simultaneously
Any node can jump in/out of the conversation at any time
A process abnormally terminating should not adversely affect other nodes
A process responding slowly should also not adversely affect other nodes
A node does not need to be 'listening' to broadcast a message
A few more important details...
The 'messages' we need to send are trivial in nature. A name of the type of message and a single string argument would suffice.
The communications are not necessarily secure and do not need to provide any means of authentication or access control; however, we want to group communications by Windows log-on session. Perhaps of interest here is that a non-elevated process should be able to interact with an elevated process and vice versa.
My first question: is there an existing open-source library, or something that can be used to fulfill this with little effort? As of now I haven't been able to find anything :(
If a library doesn't exist for this then... What technologies would you use to solve this problem? Sockets, named pipes, memory-mapped files, event handles? It seems like connection-based transports (sockets/pipes) would be a bad idea in a fully connected graph, since n nodes require n(n-1)/2 connections. Using event handles and some form of shared storage seems the most plausible solution right now...
Updates
Does it have to be reliable and guaranteed? Yes, and no... Let's say that if I'm listening, and I'm responding in a reasonable time, then I should always get the message.
What are the typical message sizes? less than 100 bytes including the message identifier and argument(s). These are small.
What message rate are we talking about? Low throughput is acceptable, 10 per second would be a lot, average usage would be around 1 per minute.
What are the number of processes involved? I'd like it to handle between 0 and 50, with the average being between 5 and 10.
I don't know of anything that already exists, but you should be able to build something with a combination of:
Memory mapped files
Events
Mutex
Semaphore
This can be built in such a way that no "master" process is required, since all of those can be created as named objects that are then managed by the OS and not destroyed until the last handle to them is closed. The basic idea is that the first process to start up creates the objects you need, and then all other processes connect to those. If the first process shuts down, the objects remain as long as at least one other process is holding a handle to them.
The memory-mapped file is used to share memory among the processes. The mutex provides synchronization to prevent simultaneous updates. If you want to allow multiple readers or a single writer, you can build something like a reader/writer lock using a couple of mutexes and a semaphore (see Is there a global named reader/writer lock?). And events are used to notify everybody when new messages are posted.
I've waved my hand over some significant technical detail. For example, knowing when to reset the event is kind of tough. You could instead have each app poll for updates.
But going this route will provide a connectionless way of sharing information. It doesn't require that a "server" process is always running.
For implementation, I would suggest implementing it in C++ and letting the C# programs call it through P/Invoke. Or perhaps in C# and letting the C++ apps call it through COM interop. That's assuming, of course, that your C++ apps are native rather than C++/CLI.
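For what it's worth, here's a rough C# sketch of the named-object scheme described above (the object names and the fixed 4 KB layout are assumptions, and a real implementation would still need per-reader cursors or sequence numbers, which I've hand-waved over):

    // Named kernel objects live as long as at least one handle stays open, so no master is needed.
    using System;
    using System.IO.MemoryMappedFiles;
    using System.Text;
    using System.Threading;

    class SharedBoard
    {
        static readonly Mutex Lock = new Mutex(false, @"Local\board_mutex");
        static readonly EventWaitHandle NewMessage =
            new EventWaitHandle(false, EventResetMode.ManualReset, @"Local\board_event");
        static readonly MemoryMappedFile Board =
            MemoryMappedFile.CreateOrOpen(@"Local\board_mmf", 4096);

        public static void Post(string message)
        {
            Lock.WaitOne();
            try
            {
                using (var accessor = Board.CreateViewAccessor())
                {
                    byte[] bytes = Encoding.UTF8.GetBytes(message);
                    accessor.Write(0, bytes.Length);                 // length prefix at offset 0
                    accessor.WriteArray(4, bytes, 0, bytes.Length);  // payload after it
                }
                NewMessage.Set();                                    // wake up listeners
            }
            finally { Lock.ReleaseMutex(); }
        }

        public static string WaitForMessage()
        {
            NewMessage.WaitOne();
            Lock.WaitOne();
            try
            {
                using (var accessor = Board.CreateViewAccessor())
                {
                    int length = accessor.ReadInt32(0);
                    var bytes = new byte[length];
                    accessor.ReadArray(4, bytes, 0, length);
                    return Encoding.UTF8.GetString(bytes);
                }
            }
            finally { Lock.ReleaseMutex(); }
        }
    }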
I've never tried this, but in theory it should work. As I mentioned in my comment, use a UDP port on the loopback device. Then all the processes can read and write from/to this socket. As you say, the messages are small, so they should fit into a single packet - maybe you can look at something like Google's protocol buffers to generate the structures, or simply memcpy the structure into the packet to send and cast it at the other end. Given it's all on the local host, you don't have any alignment or network byte order issues to worry about. To support different types of messages, ensure a common header which can be checked for type so that you can stay backward compatible.
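To make the idea concrete, here's a rough sketch. Note that it uses UDP multicast rather than plain loopback unicast, so that every listening node gets its own copy of each datagram; the group address and port are arbitrary:

    // Any node can broadcast; every listening node joined to the group receives the datagram.
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class UdpBus
    {
        static readonly IPAddress Group = IPAddress.Parse("239.255.10.10");
        const int Port = 9051;

        public static void Broadcast(string message)
        {
            using (var client = new UdpClient())
            {
                byte[] bytes = Encoding.UTF8.GetBytes(message);
                client.Send(bytes, bytes.Length, new IPEndPoint(Group, Port));
            }
        }

        public static void Listen()
        {
            using (var client = new UdpClient())
            {
                client.Client.SetSocketOption(SocketOptionLevel.Socket,
                                              SocketOptionName.ReuseAddress, true);
                client.Client.Bind(new IPEndPoint(IPAddress.Any, Port));
                client.JoinMulticastGroup(Group);

                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    byte[] data = client.Receive(ref remote);
                    Console.WriteLine("Got: " + Encoding.UTF8.GetString(data));
                }
            }
        }
    }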
2cents...
I think one more important consideration is performance: what message rate are we talking about, and how many processes are involved?
Either way you are relying on a "master" that enables the communication, be it a custom service or something the system provides (pipes, message queues, and such).
If you don't need to keep track of and query past messages, I do think you should consider a dead simple service that opens a named pipe, allowing all other processes to either read from or write to it as pipe clients. If I am not mistaken, it checks all the items on your list.
What you're looking for is Mailslots!
See CreateMailslot:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365147(v=vs.85).aspx
I need to process large image files into smaller image files. I would like to distribute the work to many "slave" servers, rather than tasking my main server with this. I am using Windows Server 2005/2008, C#, and ASP.NET. I have a lot of web application development experience but have not developed distributed systems. I had a notion that this could be designed as follows:
1) Files would be placed in a shared network drive
2) Slave servers would periodically poll the drive for new content
3) Slave servers would rename newly found files to something like UNPROCESSED_appIDXXXX_jidXXXXX_photoidXXXXX.tif and begin processing that file.
4) Other slave servers would avoid trying to process files that are in process by examining file name, i.e. if something has been named "UNPROCESSED" they will not attempt to process.
I am wondering a few things:
1) Will there be issues with two slave servers trying to "grab" and rename the file at once, or will Windows Server automatically lock the file?
2) What do you think the best mechanism for notification of new content for processing should be? One simple idea is to write a basic aspx page on each slave system and have it running on a timer. A better idea might be to write a Windows service that utilizes FileSystemWatcher and have it running on each slave system. A third idea is to have a central server somehow dispatch instructions to a given slave server to attempt a processing job, but I do not know of ways of invoking that kind of communication beyond a very hackish approach of having the master server pass a message via HTTP.
I'd much appreciate any guidance you have to offer.
Cheers,
-KF
If you don't want to go all the way with a compute-cluster-type solution, you should consider having a job manager running somewhere that will parcel out the work. That way, when a server becomes available to do work, it asks the job manager for a new bit of work to do. It can then tell the job manager when it's finished, and the job manager can inform your "client" when the work on the whole job is complete. That way, it's easy to register work and know it's complete, and the job manager can parcel out the work without worrying about race conditions on file renames. :)
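A rough sketch of the job manager's bookkeeping (how the slaves reach it - WCF, a simple HTTP endpoint, etc. - is a separate choice, so only the in-memory part is shown):

    // Work items are claimed by workers instead of being renamed, so there's no rename race.
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    class JobManager
    {
        readonly ConcurrentQueue<string> _pending = new ConcurrentQueue<string>();
        readonly ConcurrentDictionary<string, string> _inProgress =
            new ConcurrentDictionary<string, string>();

        public void Register(IEnumerable<string> imageFiles)
        {
            foreach (string file in imageFiles)
                _pending.Enqueue(file);
        }

        // A slave calls this when it is free to take on work.
        public bool TryClaim(string workerId, out string imageFile)
        {
            if (_pending.TryDequeue(out imageFile))
            {
                _inProgress[imageFile] = workerId;
                return true;
            }
            return false;
        }

        // A slave calls this when the resize is done.
        public void Complete(string imageFile)
        {
            string claimedBy;
            _inProgress.TryRemove(imageFile, out claimedBy);
        }

        public bool AllDone => _pending.IsEmpty && _inProgress.IsEmpty;
    }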