Finding or building an inter-process broadcast communication channel - C#

So we have this somewhat unusual need in our product. We have numerous processes running on the local host and need to construct a means of communication between them. The difficulty is that ...
There is no 'server' or master process
Messages will be broadcast to all listening nodes
Nodes are all Windows processes, but may be C++ or C#
Nodes will be running in both 32-bit and 64-bit simultaneously
Any node can jump in/out of the conversation at any time
A process abnormally terminating should not adversely affect other nodes
A process responding slowly should also not adversely affect other nodes
A node does not need to be 'listening' to broadcast a message
A few more important details...
The 'messages' we need to send are trivial in nature. A name of the type of message and a single string argument would suffice.
The communications are not necessarily secure and do not need to provide any means of authentication or access control; however, we want to group communications by Windows logon session. Perhaps of interest here is that a non-elevated process should be able to interact with an elevated process and vice versa.
My first question: is there an existing open-source library, or anything else that can be used to fulfill this with little effort? So far I haven't been able to find anything :(
If a library doesn't exist for this, then... what technologies would you use to solve this problem? Sockets, named pipes, memory-mapped files, event handles? It seems like connection-based transports (sockets/pipes) would be a bad idea in a fully connected graph, since n nodes require n(n-1) connections. Using event handles and some form of shared storage seems the most plausible solution right now...
Updates
Does it have to be reliable and guaranteed? Yes, and no... Let's say that if I'm listening, and I'm responding in a reasonable time, then I should always get the message.
What are the typical message sizes? Less than 100 bytes, including the message identifier and argument(s). These are small.
What message rate are we talking about? Low throughput is acceptable, 10 per second would be a lot, average usage would be around 1 per minute.
What are the number of processes involved? I'd like it to handle between 0 and 50, with the average being between 5 and 10.

I don't know of anything that already exists, but you should be able to build something with a combination of:
Memory mapped files
Events
Mutex
Semaphore
This can be built in such a way that no "master" process is required, since all of those can be created as named objects that are then managed by the OS and not destroyed until the last client uses them. The basic idea is that the first process to start up creates the objects you need, and then all other processes connect to those. If the first process shuts down, the objects remain as long as at least one other process is maintaining a handle to them.
The memory mapped file is used to share memory among the processes. The mutex provides synchronization to prevent simultaneous updates. If you want to allow multiple readers or one writer, you can build something like a reader/writer lock using a couple of mutexes and a semaphore (see Is there a global named reader/writer lock?). And events are used to notify everybody when new messages are posted.
I've waved my hand over some significant technical detail. For example, knowing when to reset the event is kind of tough. You could instead have each app poll for updates.
But going this route will provide a connectionless way of sharing information. It doesn't require that a "server" process is always running.
For implementation, I would suggest writing it in C++ and letting the C# programs call it through P/Invoke, or perhaps in C# and letting the C++ apps call it through COM interop. That's assuming, of course, that your C++ apps are native rather than C++/CLI.
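To make that concrete, here is a minimal C# sketch of the idea (the object names, the fixed-size layout and the single-slot message buffer are illustrative choices, not a finished design):

    using System;
    using System.IO.MemoryMappedFiles;
    using System.Text;
    using System.Threading;

    class SharedBroadcastChannel : IDisposable
    {
        const int Capacity = 4096;
        readonly MemoryMappedFile _memory;
        readonly Mutex _writeLock;
        readonly EventWaitHandle _newMessage;

        public SharedBroadcastChannel(string channel)
        {
            // The "Local\" prefix scopes the named objects to the current logon session,
            // which is the grouping the question asks for.
            _memory = MemoryMappedFile.CreateOrOpen(@"Local\" + channel + "_mem", Capacity);
            _writeLock = new Mutex(false, @"Local\" + channel + "_mutex");
            _newMessage = new EventWaitHandle(false, EventResetMode.ManualReset, @"Local\" + channel + "_event");
        }

        public void Publish(string message)
        {
            byte[] payload = Encoding.UTF8.GetBytes(message);
            _writeLock.WaitOne();
            try
            {
                using (var view = _memory.CreateViewAccessor())
                {
                    view.Write(0, payload.Length);                  // length prefix
                    view.WriteArray(4, payload, 0, payload.Length); // then the message bytes
                }
                // Wake every listener. Deciding when to Reset this (and keeping a history so
                // slow readers don't miss messages) is exactly the hand-waved detail above.
                _newMessage.Set();
            }
            finally { _writeLock.ReleaseMutex(); }
        }

        public string WaitForMessage()
        {
            _newMessage.WaitOne();
            _writeLock.WaitOne();
            try
            {
                using (var view = _memory.CreateViewAccessor())
                {
                    int length = view.ReadInt32(0);
                    byte[] payload = new byte[length];
                    view.ReadArray(4, payload, 0, length);
                    return Encoding.UTF8.GetString(payload);
                }
            }
            finally { _writeLock.ReleaseMutex(); }
        }

        public void Dispose()
        {
            _memory.Dispose();
            _writeLock.Close();
            _newMessage.Close();
        }
    }

A real version would replace the single slot with a ring buffer plus per-reader read positions, so a slow or dead reader doesn't block or lose messages for anybody else.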

I've never tried this, but in theory it should work. As I mentioned in my comment, use a UDP port on the loopback device. Then all the processes can read and write from/to this socket. As you say, the messages are small, so they should fit into a single packet; maybe you can look at something like Google's protocol buffers to generate the structures, or simply memcpy the structure into the packet on the send side and cast it back on the other end. Given that it's all on the local host, you don't have any alignment or network byte order issues to worry about. To support different types of messages, ensure a common header that can be checked for type so that you can stay backward compatible.
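As a rough sketch of what I mean (the multicast group, port and the "type|argument" framing are arbitrary picks):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class UdpBroadcastChannel
    {
        static readonly IPAddress Group = IPAddress.Parse("239.255.10.10"); // arbitrary local-scope group
        const int Port = 45678;                                             // arbitrary port
        readonly UdpClient _client;

        public UdpBroadcastChannel()
        {
            _client = new UdpClient();
            // ReuseAddress lets every node bind the same port on the local machine.
            _client.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
            _client.Client.Bind(new IPEndPoint(IPAddress.Any, Port));
            _client.JoinMulticastGroup(Group, IPAddress.Loopback);
            _client.MulticastLoopback = true;   // so a sender also receives its own messages
        }

        public void Send(string type, string argument)
        {
            byte[] datagram = Encoding.UTF8.GetBytes(type + "|" + argument); // trivial "header|payload" framing
            _client.Send(datagram, datagram.Length, new IPEndPoint(Group, Port));
        }

        public string Receive()
        {
            IPEndPoint from = null;
            byte[] datagram = _client.Receive(ref from);   // blocks until one whole message arrives
            return Encoding.UTF8.GetString(datagram);
        }
    }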
2cents...

I think one more important consideration is performance: what message rate are we talking about, and how many processes are involved?
Either way you are relying on a "master" that enables the communication, be it a custom service or something system-provided (pipes, message queues, and such).
If you don't need to keep track of and query past messages, I think you should consider a dead-simple service that opens a named pipe, allowing all other processes to read from or write to it as pipe clients. If I am not mistaken, that checks all the items on your list.

What you're looking for is Mailslots!
See CreateMailslot:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365147(v=vs.85).aspx
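From C# you would reach the API through P/Invoke; a minimal sketch (the slot name is made up) could look like this:

    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using System.Text;
    using Microsoft.Win32.SafeHandles;

    static class MailslotSketch
    {
        const uint GENERIC_WRITE = 0x40000000;
        const uint FILE_SHARE_READ = 0x00000001;
        const uint OPEN_EXISTING = 3;
        const uint MAILSLOT_WAIT_FOREVER = 0xFFFFFFFF;

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern SafeFileHandle CreateMailslot(string name, uint maxMessageSize,
            uint readTimeout, IntPtr securityAttributes);

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern SafeFileHandle CreateFile(string name, uint access, uint share,
            IntPtr securityAttributes, uint creationDisposition, uint flags, IntPtr template);

        // Listener: owns the mailslot and blocks until a message arrives.
        public static string ReadOne()
        {
            using (SafeFileHandle slot = CreateMailslot(@"\\.\mailslot\myapp_broadcast",
                                                        0, MAILSLOT_WAIT_FOREVER, IntPtr.Zero))
            using (var stream = new FileStream(slot, FileAccess.Read))
            {
                byte[] buffer = new byte[424];                    // mailslot messages are small datagrams
                int read = stream.Read(buffer, 0, buffer.Length); // one read returns one whole message
                return Encoding.UTF8.GetString(buffer, 0, read);
            }
        }

        // Sender: opens the slot like a file and writes one message.
        public static void Write(string message)
        {
            using (SafeFileHandle handle = CreateFile(@"\\.\mailslot\myapp_broadcast", GENERIC_WRITE,
                                                      FILE_SHARE_READ, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
            using (var stream = new FileStream(handle, FileAccess.Write))
            {
                byte[] payload = Encoding.UTF8.GetBytes(message);
                stream.Write(payload, 0, payload.Length);
            }
        }
    }

As far as I remember, only one process per machine can create a given mailslot name, so for several local listeners you would either give each its own slot name (and have the sender write to each) or use the broadcast form of the name.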

Related

C#/.NET: Reporting subprocess state to the parent service

I currently have a service running several subprocesses (with System.Diagnostics.Process). Each subprocess can run for hours and be in a specific, predefined state (think "starting", "working", "cleaning up", etc.; completely predefined, with no custom data attached to any state that has to be reported). The subprocesses cannot each be an individual Windows service (there are more possible states than Windows service states). I need to somehow report this state to the parent service. All processes are running on the same Windows machine.
I need to be able both to query subprocess states from other processes (not the ones started by the service), and to update the parent service about each subprocess's state from those subprocesses. Each process gets a unique ID, so other processes can read the states easily without having to manage the processes themselves. All processes share a configuration file in which each subprocess is assigned a unique ID to identify itself with. I've thought about doing it like so:
Redirect the subprocesses' standard output to the service (RedirectStandardOutput = true), read each line of the output, and catch "special" lines (STATECHANGE:state); a sketch of this option follows these two points.
Write out all subprocesses' states to a file in a predefined location whenever that state changes, delete that file on service exit.
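For the first option, I'm picturing something like this on the service side (the executable name and the state handling are placeholders):

    using System;
    using System.Diagnostics;

    class SubprocessStateWatcher
    {
        static void Main()
        {
            var startInfo = new ProcessStartInfo("worker.exe")   // placeholder subprocess
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            Process worker = Process.Start(startInfo);

            worker.OutputDataReceived += (sender, e) =>
            {
                // Catch the "special" lines and treat everything else as normal output.
                if (e.Data != null && e.Data.StartsWith("STATECHANGE:"))
                {
                    string newState = e.Data.Substring("STATECHANGE:".Length);
                    Console.WriteLine("Subprocess state is now: " + newState);   // update the state table here
                }
            };
            worker.BeginOutputReadLine();
            worker.WaitForExit();
        }
    }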
It looks like I'm trying to find a solution to a problem which was solved ages ago and I haven't found that solution. Is there any "nice" way to do such state reporting?
In general, you're delving into the realm of interprocess communications, or IPC.
Though you haven't tagged this question as being specific to Microsoft Windows, it is tagged as C# and .NET, so it's probable that you are running in a Windows environment. My answer assumes you're running this system on MS Windows.
A common solution to a problem such as this is to store state in a database. Each service/process could write to the database independently, and any process interested in that information could then query it. But this isn't real two-way communication.
Regarding how the parent could communicate with the child processes, this could be done a number of ways, but it would probably be easiest if the child process ran some kind of message pump on a thread and performed data processing on another thread. The message pump would receive and respond to messages, while the data processing thread would do its thing.
Using this scheme, messages could be exchanged in a number of different ways, including:
Windows Communication Foundation (WCF)
Named Pipes
.NET Remoting
MS Message Queue (MSMQ)
Windows Clipboard
Dynamic Data Exchange (DDE)
Component Object Model (COM)
Memory-mapped Files
Remote Procedure Calls (RPC)
Sockets
Since all of these processes are running on the same machine, pipes are a simple, straightforward choice. Check out the System.IO.Pipes namespace.
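For example, a bare-bones version of such a pipe could look like this (the pipe name and the "id:state" line format are just for illustration):

    using System;
    using System.IO;
    using System.IO.Pipes;

    static class StatePipe
    {
        const string PipeName = "MyServiceStatePipe";   // illustrative name

        // Runs inside the parent service, typically on its own thread, once per expected client.
        public static void ListenOnce()
        {
            using (var server = new NamedPipeServerStream(PipeName, PipeDirection.In,
                                                          NamedPipeServerStream.MaxAllowedServerInstances))
            {
                server.WaitForConnection();
                using (var reader = new StreamReader(server))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        string[] parts = line.Split(':');        // "processId:state"
                        Console.WriteLine("Process {0} is now {1}", parts[0], parts[1]);
                    }
                }
            }
        }

        // Called from a subprocess whenever its state changes.
        public static void ReportState(string processId, string state)
        {
            using (var client = new NamedPipeClientStream(".", PipeName, PipeDirection.Out))
            {
                client.Connect(1000);
                using (var writer = new StreamWriter(client))
                {
                    writer.WriteLine(processId + ":" + state);
                }
            }
        }
    }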
WCF allows you to build a rich messaging interface that can be implemented on top of pipes, as well as on top of other IPC mechanisms.
There are lots of good resources on the Internet that discuss interprocess communication in .NET; rather than rehash them here, I suggest searching with terms such as ".NET", "interprocess communications", "IPC" and "local machine" (since you need IPC between processes on the local machine).

What APIs can I use for my specific bandwidth throttling requirements? [duplicate]

I would like to make the following happen:
My application runs on a Windows machine (call it application A).
I can modify the source code of application A to introduce bandwidth throttling.
I would like to be able to reuse my bandwidth throttling code and drop it into any other applications that I have (in other words, I would like to try and throttle the bandwidth on an application domain level in order to not have to re-factor existing applications for bandwidth throttling).
I want to throttle A's cumulative upload and download speed separately. For example, if A has a maximum of 5 Kbps allotted to upload, then all of A's upload streams will be capped to a cumulative amount of 5 Kbps.
My requirements:
I cannot use a kernel-mode driver.
I need to add throttling on an application domain level.
I have tried to research this, especially on Stack Overflow, but could not find anything useful for my case:
I have seen this example of using a ThrottledStream class wrapper around a Stream object that will introduce throttling when used, but I need this to be at a domain level; taking this approach is problematic because it would require me to refactor a lot of existing code in other applications.
I have seen this question whose answer speaks about using the Windows Filtering Platform API. Unfortunately, a requirement I have is that I absolutely can't use a kernel-mode driver to accomplish this, and my understanding is that the WFP API requires one.
Does anyone know a way to implement my specific bandwidth throttling requirements in order to throttle applications on an application domain level?
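For context, the ThrottledStream approach I'm referring to boils down to something like the following (a simplified sketch, not the code from the linked example), which also shows why it would have to be wired into every stream individually:

    using System;
    using System.IO;
    using System.Threading;

    class ThrottledStream : Stream
    {
        readonly Stream _inner;
        readonly long _bytesPerSecond;
        long _bytesSinceStart;
        readonly DateTime _windowStart = DateTime.UtcNow;

        public ThrottledStream(Stream inner, long bytesPerSecond)
        {
            _inner = inner;
            _bytesPerSecond = bytesPerSecond;
        }

        void Throttle(int count)
        {
            _bytesSinceStart += count;
            double elapsed = (DateTime.UtcNow - _windowStart).TotalSeconds;
            double expected = (double)_bytesSinceStart / _bytesPerSecond;
            if (expected > elapsed)
                Thread.Sleep(TimeSpan.FromSeconds(expected - elapsed)); // fall back to the allowed rate
        }

        public override void Write(byte[] buffer, int offset, int count)
        {
            Throttle(count);
            _inner.Write(buffer, offset, count);
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            int read = _inner.Read(buffer, offset, count);
            Throttle(read);
            return read;
        }

        // The remaining members just delegate to the wrapped stream.
        public override bool CanRead { get { return _inner.CanRead; } }
        public override bool CanSeek { get { return _inner.CanSeek; } }
        public override bool CanWrite { get { return _inner.CanWrite; } }
        public override long Length { get { return _inner.Length; } }
        public override long Position { get { return _inner.Position; } set { _inner.Position = value; } }
        public override void Flush() { _inner.Flush(); }
        public override long Seek(long offset, SeekOrigin origin) { return _inner.Seek(offset, origin); }
        public override void SetLength(long value) { _inner.SetLength(value); }
    }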
I think I have found a solution. With the QOS API, you need to get a handle to your target interface using TcOpenInterface (you can figure out which interface you want to target via a call to TcEnumerateInterfaces). With your interface handle, you need to call TcAddFlow along with a pointer to a TC_GEN_FLOW structure, which allows you to specify both a SendingFlowspec (FLOWSPEC structure) and a ReceivingFlowspec (FLOWSPEC structure) which contains a PeakBandwidth member. Then, to make your interface utilize this flow you've just added to it, you need to add a filter to your interface using a call to TcAddFilter, as MSDN says that the TcAddFilter function associates a new filter with an existing flow that allows packets matching the filter to be directed to the associated flow. I think that to make it application specific, calling TcRegisterClient may do the trick, which you will need to call anyways in order to get a client handle to use with TcEnumerateInterfaces and TcAddFlow from the looks of it (but this remains to be tested). I found this useful example as well (haven't tested it).
Taken from MSDN, the PeakBandwidth member is the upper limit on time-based transmission permission for a given flow, in bytes per second. The PeakBandwidth member restricts flows that may have accrued a significant amount of transmission credits, or tokens from overburdening network resources with one-time or cyclical data bursts, by enforcing a per-second data transmission ceiling. Some intermediate systems can take advantage of this information, resulting in more efficient resource allocation.

Multithreading with FileSystemWatcher and/or an MSMQ WCF service

I need to create a service which is basically responsible for the following:
Watch a specific folder for any new files created.
If so, read that file, process it, and save the data in the DB.
For the above task, I am thinking of creating a multi-threaded service with one of the following approaches:
In the main thread, create an instance of FileSystemWatcher and, as soon as a new file is created, add that file to the thread queue. There will be N consumer threads running, each of which should take a file from the queue and process it (i.e. step 2).
Again, in the main thread, create an instance of FileSystemWatcher and, as soon as a new file is created, read that file and add the data to MSMQ using a WCF MSMQ service. When the message is read by the WCF MSMQ service, that service will be responsible for further processing.
I am a newbie when it comes to creating a multi-threaded service, so I am not sure which will be the best option. Please guide me.
Thanks,
First off, let me say that you have taken a wise approach in going for a single-producer, multiple-consumer model. This is the best approach in this case.
I would go for option 1, using a ConcurrentQueue data structure, which provides you an easy way to queue tasks in a thread-safe manner. Alternatively, you can simply use the ThreadPool.QueueUserWorkItem method to send work directly to the built-in thread pool, without worrying about managing the workers or the queue explicitly.
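For illustration, option 1 could be shaped roughly like this (the folder, filter and consumer count are placeholders); BlockingCollection wraps a ConcurrentQueue and lets the consumers block until work arrives:

    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Threading.Tasks;

    class FileProcessingService
    {
        readonly BlockingCollection<string> _queue =
            new BlockingCollection<string>(new ConcurrentQueue<string>());
        readonly FileSystemWatcher _watcher =
            new FileSystemWatcher(@"C:\inbox", "*.xml");   // placeholder folder and filter

        public void Start()
        {
            _watcher.Created += (s, e) => _queue.Add(e.FullPath);   // producer: only enqueue the path
            _watcher.EnableRaisingEvents = true;

            for (int i = 0; i < 4; i++)                             // N consumer threads
            {
                Task.Factory.StartNew(() =>
                {
                    foreach (string path in _queue.GetConsumingEnumerable())
                        ProcessFile(path);                          // step 2: read, process, save to DB
                }, TaskCreationOptions.LongRunning);
            }
        }

        void ProcessFile(string path)
        {
            string content = File.ReadAllText(path);
            // ... parse the content and write it to the database ...
        }
    }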
Edit: Regarding the reliability of FileSystemWatcher, MSDN says:
The Windows operating system notifies your component of file changes in a buffer created by the FileSystemWatcher. If there are many changes in a short time, the buffer can overflow. This causes the component to lose track of changes in the directory, and it will only provide blanket notification. Increasing the size of the buffer with the InternalBufferSize property is expensive, as it comes from non-paged memory that cannot be swapped out to disk, so keep the buffer as small yet large enough to not miss any file change events. To avoid a buffer overflow, use the NotifyFilter and IncludeSubdirectories properties so you can filter out unwanted change notifications.
So it depends on how often changes will occur and how much buffer you are allocating.
I would also consider your demands for failure handling and sizes of the files you are sending.
Whether you decide on option 1 or option 2 will depend on your specifications.
Option 2 has the advantage that by using MSMQ your data is persisted in a recoverable way, even if you need to restart your machine. Option 1 only keeps your data in memory, where it might get lost.
On the other hand, option 2 has the disadvantage that the MSMQ message size is limited to 4 MB per message (explained in a Microsoft blog here), and therefore only half of that when working with Unicode characters, while in-memory queues are capable of much bigger sizes.
[Edit]
Thinking a bit longer, I would prefer option 2.
In your comment, you mention that you want to move files around in the filesystem. This can be very expensive in terms of performance, and even worse if you move the files between different partitions.
I have used MSMQ in multiple projects at work and am convinced that it would work well for what you want to do. A big advantage here is that MSMQ supports transactional communication. That means that if, for some reason, a network or power or whatever failure occurs, neither your message nor your files get lost.
If any of those happens while you are moving a file around, it could easily get corrupted.
The only thing that still grumbles in my stomach is the file sizes. To work around the 4 MB message size limitation (see the link added above), I would not put the file content into a message. Instead, I would only send an ID or a file path with it, so that the consuming service can find the file and read it when needed.
This keeps the message and queue sizes small and avoids using too much bandwidth or memory on the network and on your server(s).
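To illustrate, sending only the path could look roughly like this (the queue name is just an example; it assumes a transactional private queue and a reference to System.Messaging):

    using System.Messaging;

    static class FileNotificationQueue
    {
        const string QueuePath = @".\private$\incomingFiles";   // example queue name

        public static void Send(string filePath)
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath, transactional: true);

            using (var queue = new MessageQueue(QueuePath))
            {
                // Only the file path travels in the message, keeping it far below the 4 MB limit.
                queue.Send(filePath, MessageQueueTransactionType.Single);
            }
        }

        public static string Receive()
        {
            using (var queue = new MessageQueue(QueuePath))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                Message message = queue.Receive(MessageQueueTransactionType.Single);
                return (string)message.Body;   // the consumer then opens and processes the file itself
            }
        }
    }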

BizTalk server problem

We have a BizTalk server (a virtual one (1!)...) at our company, and an SQL server where the data is kept.
Now we have a lot of data traffic. I'm talking hundreds of thousands. So I'm actually not even sure that one server is safe enough, but our company is not that easy to convince.
Recently we have been having a lot of problems.
Allow me to describe the situation in detail, so I'm not missing anything:
Our server has 5 applications:
One with 3 orchestrations, 12 send ports, 16 receive locations.
One with 4 orchestrations, 32 send ports, 20 receive locations.
One with 4 orchestrations, 24 send ports, 20 receive locations.
One with 47 (yes 47) orchestrations, 37 send ports, 6 receive locations.
One with common application with a couple of resources.
Our problems have occurred since we deployed the application with the 47 orchestrations.
A lot of these orchestrations use assign shapes, which use C# code to do the mapping. This is because we use HL7 extensions, which are somewhat special, so using C# code and XPath made the mapping a lot easier, since a lot of these schemas look alike. The C# reads in XmlNodes received through XPath and returns XmlNodes, which are then assigned back to BizTalk messages. I'm not sure if this could be the cause, but I thought I'd mention it.
The send and receive ports have a lot of different types: File, MQSeries, SQL, MLLP, FTP.
Each of these types has a different host instance, to balance out the load.
Our orchestrations use the BiztalkApplication host.
A couple of scripts are also running on this server, mostly FTP upload scripts and a zipper script, which zips files every half hour into a daily zip and deletes the zip files after a month. We use this zip script on our backup files (we back up a lot, and the backups are also on this server). We did this because the server had problems sending files to a location where there were a lot (A LOT) of files; after the files were reduced to zips it went better.
The problems we have been having recently are mainly two major ones:
Our most important problem is the following. We kept a receive location with a lot of messages on a queue for testing. After we start this receive location, which uses the 47 orchestrations, the number of running service instances starts to skyrocket. OK, this is pretty normal; let's say about 10,000. Then we stop the receive location to see how BizTalk handles these 10,000 instances. Normally they would go down pretty fast, and sometimes they do, but after a while BizTalk starts to "throttle", meaning they just stop being processed and the service instance count stays at the same number. For example, in 30 seconds it goes down from 10,000 to 4,000, and then it stays at 4,000 and lowers very, very slowly, like 30 instances in 5 minutes. This means that all the service instances of the other applications are also stuck in there, and they are not processed either.
We noticed that after restarting our host instances the instance count went down quickly again, so we tried selectively restarting different host instances to locate the problem. We noticed that eventually restarting the file send/receive host instance would do the trick, so we thought file sends were the problem, considering that we make a lot of backups. So we replaced the file-type backups with MQSeries backups. The same problem occurred, and, funnily enough, restarting the file send/receive host still fixes the problem.
No errors can be found in the event viewer either.
A second problem we're having is that sometimes, at around 6 am, all or some of the host instances are stopped.
In the Event Viewer we noticed the following errors (there is more than one):
The receive location "MdnBericht SQL" with URL "SQL://ZNACDBPEG/mdnd0001/" is shutting down. Details:"The error threshold has been exceeded. The receive location is shutting down.".
The Messaging Engine failed to add a receive location "M2m Othello Export Start Bestand" with URL "\m2mservices\Othello_import$\DataFilter Start*.xml" to the adapter "FILE". Reason: "The FILE adapter cannot access the folder \m2mservices\Othello_import$\DataFilter Start.
Verify this folder exists.
Error: Logon failure: unknown user name or bad password.
".
The FILE adapter cannot access the folder \m2mservices\Othello_import$\DataFilter Start.
Verify this folder exists.
Error: Logon failure: unknown user name or bad password.
An attempt to connect to "BizTalkMsgBoxDb" SQL Server database on server "ZNACDBBTS" failed.
Error: "Login failed for user ''. The user is not associated with a trusted SQL Server connection."
It would seem that there's a logon failure at this time and that, because of it, other services also experience problems and are eventually shut down.
The thing is, our user is an admin, and it's impossible that its password is wrong "sometimes". We have considered that this could be due to an infrastructure problem, but that's not really our department.
I know it's a long post, but we're not sure what to do anymore. Would adding another server and balancing the load solve our problems? Is there a way to measure our load and know where to start splitting? What are normal load numbers, etc.?
I appreciate any answers because these issues are getting worse and we're also on a deadline.
Thanks a lot for replies!
Your immediate problem is BizTalk's throttling feature. It's supposed to help BizTalk survive temporary overload conditions. One of its many problems is that you can see throttling kick in only in Performance Monitor and not in the event log.
What you should do:
Separate the new application into a different host from the rest of the applications. Throttling is done at the host level, so the problematic application won't affect the rest of the applications.
Read about how to disable throttling in the link above.
What we have done is implement an external throttling service that feeds the BizTalk receive location in small, digestible packets. It's ugly, but the problem is ugly.
Update to comment: You have enough host instances, so ignore that advice. You may redistribute the applications between the instances, but there are no clear guidelines for doing that, so it's just shuffling and guessing.
About the safety of disabling throttling: this feature doesn't make much sense in many scenarios. You have to study it. Check which of the throttling parameters you are hitting (this can be seen in Performance Monitor) and decide how to change the thresholds.
How many host instances do you have?
From the line:
The send and receive ports have a lot of different types: File, MQSeries, SQL, MLLP, FTP. Each of these types have a different host instances, to balance out the load. Our orchestrations use the BiztalkApplication host.
It sounds like you have a lot. I recently did an audit of a system where BizTalk was self-throttling, and the issue was in part due to too many host instances. Each host instance places its own load upon the BizTalk MessageBox, as well as chewing up a minimum of 200 MB of memory.
Reading your comment, you have 20; this is too many and would be a big part of your problems.
A good starting host setup would be:
A dedicated tracking host
One host that contains all receive handlers for adapters
One host that contains all orchestrations
One host that contains all send handlers for adapters
One host for adapters that need to be clustered (like FTP and MSMQ)
You can then also consider things like introducing "real time" hosts and batched hosts, so you can tune the real time hosts for low latency.
You can also have hosts for specific applications if there are known to be unstable, but in general this should not be done.
I run a BizTalk system that has similar problems and can empathize with what you are seeing. I don't know if it's the same issue, but I thought I'd share my experience in case it helps.
In the same manner, restarting the send/receive host seems to fix the problem. In my case I found a direct correlation with memory usage by the host processes. I used performance counters to see when a given host was being throttled for memory. By creating extra hosts and moving orchestrations and ports between them, I was able to narrow down which business sets were causing the problem. Basically, in my case, restarting the hosts was the ultimate "garbage collection" to free up memory. This was, of course, only until enough instances came through to gobble it up again.
I'm afraid I have not solved the issue yet, but a few things I found to alleviate the issue:
Raise the memory to a given process so that throttling does not occur or occurs later
Each host instance, while informative, does have an overhead. Try combining hosts that are not your problem children to reduce the memory footprint.
Throw hardware at the problem; RAM is cheap.
I measure the following every few minutes in perfmon so I can diagnose where the problem is:
BizTalk:MessageAgent(*)\Process memory usage (MB)
BizTalk:MessageAgent(*)\Process memory usage threshold
Memory\Available MBytes
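If you would rather sample these from code than from perfmon, something along these lines works (category and counter names as listed above; adjust them if they are spelled differently on your installation):

    using System;
    using System.Diagnostics;

    class ThrottlingSnapshot
    {
        static void Main()
        {
            // Category and counter names as used in perfmon above; check the exact
            // spelling on your BizTalk box, since it can differ between versions.
            var category = new PerformanceCounterCategory("BizTalk:MessageAgent");
            foreach (string host in category.GetInstanceNames())
            {
                using (var usage = new PerformanceCounter("BizTalk:MessageAgent",
                                                          "Process memory usage (MB)", host))
                using (var threshold = new PerformanceCounter("BizTalk:MessageAgent",
                                                              "Process memory usage threshold", host))
                {
                    Console.WriteLine("{0}: {1} MB used, threshold {2}",
                                      host, usage.NextValue(), threshold.NextValue());
                }
            }
        }
    }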
A few other things to take a look at: make sure any custom pipelines use good BizTalk memory practices (i.e. no XML DOM manipulation hiding somewhere, etc.). Also, in theory, reducing the number of threads for a given host should lower the amount of memory it can seize at one time; I did not seem to have much luck with that one. Maybe the BizTalk throttling overrode it, as others have mentioned, I don't know. On a final note, if you dump the perfmon results to a CSV, you can make some pretty memory usage graphs in Excel. These might be useful for talking to management about buying more hardware. That's assuming your issue fits this scenario as well.
We fixed the problem temporarily thanks to a combination of all your answers.
We set the process memory usage throttling parameters of some hosts higher.
We balanced the host instances better after I analyzed the memory usage of all hosts, thanks to performance counters and also a tool called MsgBoxViewer.
And now we're trying to get more physical memory, and hopefully also an extra server or a 64-bit server.
Thanks for all replies!
We recently installed a 64-bit server in a cluster with our older server. Thanks to this we can balance the memory even better, which solved a lot of problems.
Although 64-bit didn't give us much improvement (except for a bit more memory), since we can't use 64 bits for IBM MQ, MLLP, the HL7 pipelines, etc...
The other answers are helpful for run-time performance tuning, but I would recommend a design change as well.
You say that you do a lot of message manipulation in the orchestrations, in the message assignment shapes.
I would recommend moving that code into dedicated transforms. They are much more lightweight and can be executed faster. You can combine custom XSLT and C# in these maps to do the hard work. Orchestrations cost more in development, design and testing, and a whole lot more in run-time performance.
You can then use transforms for message transformation, and leave the orchestrating (what is left of it after moving the message assignment code) to the orchestrations.
The added benefit of using transforms over orchestrations is that they are much more testable.

Serial comms programming structure in C# / .NET

I'm an embedded programmer trying to do a little bit of coding for a communications app and need a quick start guide on the best / easiest way to do something.
I'm successfully sending serial data packets but need to implement some form of send/response protocol to avoid overflow on the target system and to ensure that each packet was received OK.
Right now - I have all the transmit code under a button click and it sends the whole lot without any control.
What's the best way to structure this code, i.e. sending some packets, waiting for a response, sending more, and so on until it's all done, then carrying on with the main program?
I've not used threads or callbacks or suchlike in this environment before but will learn; I just need a pointer to the most straightforward way to do it.
Thanks
Rob
The .NET SerialPort uses buffers; learn to work with them.
Sending packets that are (far) smaller than the send buffer can be done without threading.
Receiving can be done via the DataReceived event, but beware that it is called on another thread. You might as well start your own thread and use blocking reads from there.
The best approach depends on what your 'packets' and protocol look like.
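A bare-bones illustration of those two receive options (port settings are placeholders, and you would pick one of them, not both):

    using System;
    using System.IO.Ports;
    using System.Threading;

    class SerialReceiveDemo
    {
        static void Main()
        {
            var port = new SerialPort("COM1", 115200, Parity.None, 8, StopBits.One);   // placeholder settings
            port.Open();
            UseEvent(port);          // or UseReaderThread(port);
            Console.ReadLine();      // keep the demo alive
        }

        // Option A: event-based receive. The handler runs on a ThreadPool thread,
        // so marshal back to the UI thread before touching any controls.
        static void UseEvent(SerialPort port)
        {
            port.DataReceived += (s, e) => Console.WriteLine(port.ReadExisting());
        }

        // Option B: a dedicated reader thread doing blocking reads.
        static void UseReaderThread(SerialPort port)
        {
            var reader = new Thread(() =>
            {
                byte[] buffer = new byte[256];
                while (port.IsOpen)
                {
                    int read = port.Read(buffer, 0, buffer.Length);   // blocks until data arrives
                    Console.WriteLine("{0} bytes received", read);
                }
            }) { IsBackground = true };
            reader.Start();
        }
    }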
I think I have long experience with serial comms, both MCU- and PC-based.
I strongly advise against the single-threaded solution; it is very straightforward for quick testing, but absolutely out of the question for final releases.
You may certainly choose among several patterns, but they are mostly shaped around a dedicated thread for the comm process and a finite state machine to parse the protocol (during receiving).
The previous answers give you an idea of how to build a simple program, but the right choice might depend on the protocol specification, the target device, the scope of the application, etc.
There are, of course, different ways.
I will describe a thread-based way and an async-operation-based way (the thread-based one is sketched after these two points):
If you don't use threads, your app will block for as long as the operation is running. This is not what a user expects today. Since you are talking about a series of send and receive commands, I would recommend running the protocol on a thread and then waiting for it to finish. You might also add an Abort button if necessary. Set the ReadTimeout values, and at every receive be ready to catch the exception! An introduction to creating such a worker thread is here.
If you want to, use async Send/Receive functions instead of a thread (e.g. NetworkStream.BeginRead, etc.). But this is more difficult because you have to manage state between the calls, so I recommend using a finite state machine. In essence, you create an enumeration (e.g. ProtocolState) and change the state whenever an operation has completed. You can then simply create a function that performs the next step of the protocol with a simple switch/case statement. Since you are working with a remote entity (in your case the serial target system), you always have to consider that the device may not be working, or may stop working during the protocol. Handle this with a timeout timer (e.g. set to 2000 ms) started after sending each command (assuming each command gets a reply in your protocol). Stop it if the reply was received successfully, or abort on timeout.
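A rough sketch of the worker-thread variant from point 1 (the ACK byte and the 2-second timeout are assumptions about your protocol, not givens):

    using System;
    using System.IO.Ports;
    using System.Threading;

    class PacketSender
    {
        public static void SendAll(SerialPort port, byte[][] packets)
        {
            port.ReadTimeout = 2000;   // give the target 2 s to answer each packet

            var worker = new Thread(() =>
            {
                foreach (byte[] packet in packets)
                {
                    port.Write(packet, 0, packet.Length);
                    try
                    {
                        int reply = port.ReadByte();   // blocks until a byte arrives or the timeout expires
                        if (reply != 0x06)             // assume 0x06 (ACK) means "send the next one"
                            break;                     // NAK or garbage: stop, or add retry logic here
                    }
                    catch (TimeoutException)
                    {
                        break;                         // no answer: abort (or retry) the transfer
                    }
                }
            });
            worker.IsBackground = true;
            worker.Start();
            // The UI stays responsive; report progress or completion back with an event or Control.Invoke.
        }
    }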
You could also implement low-level handshaking on the serial port; set the serial port's Handshake property to RTS/CTS or XON/XOFF.
Otherwise (or in addition), use a background worker thread. For simple threads, I like a Monitor.Wait/Pulse mechanism for managing the thread.
I have some code that does read-only serial communications in a thread; email me and I'll be happy to send it to you.
I wasn't sure from your question whether you were designing both the PC and embedded sides of the communication link; if you are, you might find this SO question interesting.
