Is it possible to write custom events that can be handled by 3rd party applications?
We have an existing app and we're finding that many people who use it are writing SQL triggers to add custom functionality of their own when certain things happen in our app.
This has led to some instances where our own processes have slowed down because shoddy 3rd-party triggers block our app.
I was thinking we could make this easier for 3rd party devs if we could raise events that they could handle in their own services or apps instead of having to use triggers.
That way we'd lose the blocking because we can just fire the event and continue. Also their slowness/potential crashes would happen outside of our process.
A) is this a reasonable approach?
B) Is this possible? Can I scope an event beyond the scope of my app?
EDIT
I have since found other related questions to be of interest:
wcf cross application communication
Interprocess pubsub without network dependency
Listen for events in another application (This seems very close to what I'm after)
I guess I'm looking for the simplest approach but if we wanted to adopt this method across a number of other apps within our company we'd have some further challenges:
We have some older apps in VB6 and Delphi; from those I'd just like to be able to listen for their events in my (or a 3rd party's) newer C# apps or services.
For now, I'll look at:
Managed Spy and http://pubsub.codeplex.com
No, events are only usable by code that's loaded into your own process. If you don't trust these people now, you really don't want to expose yourself to shoddy code that you load into your own process and that throws unhandled exceptions that terminate your app. You'll get the phone call, not them. Besides, they'll use such an event to run code that slows down your app.
In general, anything you do with a dbase will run with an entirely unpredictable amount of overhead. Not just because of triggers added by others; the dbase server could simply be bogged down by other work and naturally slow down over time as it stores more and more data. Make sure that doesn't make your app difficult or unpleasant to use; dbase operations typically must run on a worker thread or be done asynchronously with, say, BeginExecuteXxxx(). And make it obvious in your UI that progress is stalled by the dbase server, not by any code that you wrote. Saves you from having to do a lot of explaining.
What you're looking to do is basically to send messages to other processes. For this, you need some sort of IPC mechanism. Since it sounds like you'll have multiple listeners to each message, a mailslot is probably the best way. Unfortunately, .NET doesn't have built-in support for mailslots, so you'll have to use P/Invoke.
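A minimal sketch of the publishing side might look like the following; the mailslot name is made up, and each listening process would create the slot with the CreateMailslot API and read from it:

    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using System.Text;
    using Microsoft.Win32.SafeHandles;

    static class MailslotPublisher
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern SafeFileHandle CreateFile(
            string fileName, uint desiredAccess, uint shareMode,
            IntPtr securityAttributes, uint creationDisposition,
            uint flagsAndAttributes, IntPtr templateFile);

        const uint GENERIC_WRITE = 0x40000000;
        const uint FILE_SHARE_READ = 0x00000001;
        const uint OPEN_EXISTING = 3;

        // Fire-and-forget: write one message and move on. If nobody is
        // listening, the open fails and the event is simply dropped.
        public static void Publish(string message)
        {
            using (var handle = CreateFile(@"\\.\mailslot\myapp_events",
                GENERIC_WRITE, FILE_SHARE_READ, IntPtr.Zero,
                OPEN_EXISTING, 0, IntPtr.Zero))
            {
                if (handle.IsInvalid)
                    return; // no listener has created the mailslot
                using (var stream = new FileStream(handle, FileAccess.Write))
                {
                    byte[] payload = Encoding.UTF8.GetBytes(message);
                    stream.Write(payload, 0, payload.Length);
                }
            }
        }
    }

Using "\\*\mailslot\..." instead of "\\.\mailslot\..." broadcasts the write to every listening mailslot of that name on the domain, which is what gives you the one-publisher/many-subscribers behavior.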
If you're looking for a built-in solution, then you could use named pipes, WCF, .NET Remoting, or bare TCP or UDP. With any of these, though, you'll have to loop through all of your listeners and send the message one at a time to each of them, which is not that big of a deal, but maintaining the separate connections is a little more of a hassle.
Note that with WCF and .NET Remoting, you're pretty much limiting your clients to using .NET as well. If your clients might be native or some other platform, then mailslots, named pipes, and TCP/IP are your best bet.
Related
I have a problem, have not much experience in C #, so I did a lot of research and I'm stuck.
I have to make two C# applications. The first is Windows Forms, the second runs in the background: the first will be a point of sale (POS) that needs to communicate with the background application to get information (products, customers, etc.) and to send data. I don't want to use a web service because of problems like timeouts, so can anyone help me with some idea of how to perform this task?
It is important to mention that there will be just one background application, while there will be many POS applications (n number of apps) communicating with it.
There is a myriad of ways of doing interprocess communication. As the question is so generic, I will point out some more common ways.
The background process can be a windows service which updates the DB, and POS systems query the DB to retrieve what they need. Even if the background process reads from the same DB, you can have a separate table which has "finished" information ready for the POS piece to pick up. Now you can use a file instead of a DB to store these finished results too, but most folks prefer a DB.
You can use a WCF channel to establish communication between the POS piece and the background process.
You can convert your background process to a web-service and let your POS piece communicate using XML. I don't think any time-out issue should be a problem. You will have to explain better what time-out issue causes you to not use this option.
You can convert the whole piece into a web-site and the POS will simply be a browser then
You can use a bus like Tibco or MQ to pass data.
Or you can go the old fashioned way of TCP sockets.
The most preferred way is usually the web-service or web-site way, depending on your constraints.
Typically you'll use a message queue for something like this. They are a component in ensuring clean separation of concerns and reduced cross-application coupling: they receive messages from some publisher (thus freeing the publisher of any further responsibility) and push messages to some subscriber.
RabbitMQ is a popular framework: https://www.rabbitmq.com/
(Note that RabbitMQ (and other ready-built frameworks) can sometimes be daunting for new application programmers, as they handle a great many use cases. However, the underlying concept of writing to a queue from one application and reading from the queue in the other application is really the key here... feel free to implement a small utility of your own as a learning experience, but I do recommend a pre-existing framework if you're comfortable using one.)
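To give a feel for the core idea, here is a rough sketch using the RabbitMQ .NET client; the queue name and host are assumptions, and in a real deployment the publisher and consumer would be separate processes:

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class QueueDemo
    {
        // Publisher side: fire-and-forget a message describing what happened.
        static void Publish(IModel channel, string text)
        {
            channel.BasicPublish(exchange: "", routingKey: "pos-events",
                basicProperties: null, body: Encoding.UTF8.GetBytes(text));
        }

        // Consumer side (the background process): react as messages arrive.
        static void Consume(IModel channel)
        {
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
                Console.WriteLine("Received: " + Encoding.UTF8.GetString(ea.Body.ToArray()));
            channel.BasicConsume(queue: "pos-events", autoAck: true, consumer: consumer);
        }

        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                channel.QueueDeclare(queue: "pos-events", durable: true,
                    exclusive: false, autoDelete: false, arguments: null);
                Consume(channel);                   // normally each process does only one of these
                Publish(channel, "sale completed");
                Console.ReadLine();                 // keep the consumer alive
            }
        }
    }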
One method is to use named pipes for such communications between different programs.
How to: Use Named Pipes for Network Interprocess Communication
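A bare-bones sketch with the built-in System.IO.Pipes types; the pipe name is made up, and the two methods would run in different processes:

    using System;
    using System.IO;
    using System.IO.Pipes;

    class PipeDemo
    {
        // Server side: create the pipe, wait for a client, send one line.
        static void RunServer()
        {
            using (var server = new NamedPipeServerStream("demo_pipe", PipeDirection.Out))
            {
                server.WaitForConnection();
                using (var writer = new StreamWriter(server) { AutoFlush = true })
                    writer.WriteLine("hello from the server");
            }
        }

        // Client side (the other process): connect and read the line.
        static void RunClient()
        {
            using (var client = new NamedPipeClientStream(".", "demo_pipe", PipeDirection.In))
            {
                client.Connect(5000); // wait up to 5 s for the server
                using (var reader = new StreamReader(client))
                    Console.WriteLine(reader.ReadLine());
            }
        }
    }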
If you do not want to use a web service (based on the SOAP protocol), you could try Web API. That way you can build REST-based interfaces with JSON (streaming JSON between computers is faster than streaming XML).
I think the following link can be useful to you:
http://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api/using-web-api-with-aspnet-web-forms
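As a rough illustration, a minimal Web API controller could look like this; the Product type is a made-up placeholder, and the default route would serve it at GET /api/products as JSON:

    using System.Collections.Generic;
    using System.Web.Http;

    // Placeholder DTO for whatever the POS needs to fetch.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // With the default route, responds to GET /api/products.
    public class ProductsController : ApiController
    {
        public IEnumerable<Product> Get()
        {
            // results are serialized to JSON (or XML) by content negotiation
            return new[] { new Product { Id = 1, Name = "Coffee" } };
        }
    }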
I have a C++ application that needs to communicate with a C# application (a Windows service) running on the same machine. I want the C++ application to be able to write as many messages as it wants, without knowing or caring when/if the C# app is reading them, or even if it's running. The C# app should just be able to wake up every now and then and request the latest messages, even if the C++ app has been shut down.
What is the simplest way to achieve this? I think this kind of thing is what MSMQ is for, but I haven't found a good way to do it in C++. I'm using Named Pipes right now, but that's not really working out as the way I'm doing it requires a connection between the two apps, and the C++ call to WriteLine blocks until the read takes place.
Currently the best solution I can think of is just writing the messages to a file with a timestamp on each message that the C# application checks periodically against its last update timestamp. That seems a little crude, though.
What is the simplest way to achieve this sort of messaging?
I would use a named pipe.
Well, the simplest way actually is using a file to store the messages. I would suggest using an embedded database like SQLite, though: the advantage will be better performance and a nice way to query for changes (i.e. SELECT * FROM messages WHERE timestamp > last_app_start).
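A rough sketch of the C# reader side, assuming the System.Data.SQLite provider and a made-up messages(timestamp, body) table that the C++ app appends to:

    using System;
    using System.Data.SQLite;

    static class MessageReader
    {
        // Prints every message newer than lastSeen; the caller persists
        // the highest timestamp it has handled.
        public static void ReadNewMessages(long lastSeen)
        {
            using (var conn = new SQLiteConnection("Data Source=messages.db"))
            {
                conn.Open();
                using (var cmd = new SQLiteCommand(
                    "SELECT timestamp, body FROM messages WHERE timestamp > @last", conn))
                {
                    cmd.Parameters.AddWithValue("@last", lastSeen);
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine(reader.GetInt64(0) + ": " + reader.GetString(1));
                }
            }
        }
    }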
MSMQ definitely sounds like what you want. The more basic alternative is reading and writing files in a common area, but then you need to watch for contention on the files.
VC++ help on MSMQ.
The requirement of both apps not always running at the same time but still being able to message each other definitely means you need a third component to store/queue messages. Whether you use a shared database/file or you write a third app that acts as a message store is up to you. Either way you will find sharing always causes contention.
Personally I would look at 0MQ before MSMQ, but neither will solve your problem as-is. An SQLite database would be my first choice.
I have been working on many applications which run as Windows services or scheduled tasks.
Now, I want to make sure that these applications will be fault tolerant and reliable. For example, I have a service that runs every hour. If the service crashes while it's running, I'd like the application to run again for the same period (there are several things involved with this, including transactions of data processing) to avoid data loss. Moreover, I'd like the program to report the error with details. My goal is to avoid data loss and to not fall behind in running the program.
I have built a class library that a user can import into a project. The library is supposed to keep information about the running instance of the program, i.e. the program reads and writes information such as running interval, running status, etc. This data is stored in a database.
I was curious if there are some best practices to make scheduled tasks/Windows services fault tolerant and reliable.
Edit: I am talking about independent tasks or services which run on different servers, and my goal is to make sure that the service will keep running, report any failures, and recover from them.
I'm interested in what other people have to say, but I'll give you a few points that I've stumbled across:
Make an event handler for Unhandled Exceptions. This way you can clean up resources, write to a log file, email an administrator, or anything you need to instead of having it crash.
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(AppUnhandledExceptionEventHandler);
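A sketch of what the handler itself might look like (the log path here is just a placeholder):

    using System;
    using System.IO;

    static class CrashLogger
    {
        // Wired up as shown above; runs just before the process dies.
        public static void AppUnhandledExceptionEventHandler(object sender, UnhandledExceptionEventArgs e)
        {
            var ex = e.ExceptionObject as Exception;
            File.AppendAllText(@"C:\logs\service.log",
                DateTime.Now + " " + (ex != null ? ex.ToString() : e.ExceptionObject.ToString())
                + Environment.NewLine);
            // Note: when e.IsTerminating is true the process still exits after
            // this handler returns, so this is for cleanup and logging, not recovery.
        }
    }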
Override any ServiceBase event handlers you need in the main part of your application. OnStart and OnStop are pretty crucial, but there are many others you can use. http://msdn.microsoft.com/en-us/library/system.serviceprocess.servicebase%28v=VS.71%29.aspx
Beware of timers. Windows Forms timers won't work right in a service. Use System.Threading.Timer or System.Timers.Timer. Best Timer for using in a Windows service
If you are updating on a thread, make sure you use a lock() or monitor in key sections to make sure everything is threadsafe.
Be careful not to use anything user-specific, as a service runs without a specific user context. I noticed some of my SQL connection strings were no longer working for Windows authentication, etc. I've also heard of people having trouble with mapped drives.
Never make a service with a UI. In fact, for Vista and 7 they make it nearly impossible to do anyway. It shouldn't require user interaction; the most you can do is send a message with a Win32 function. MSDN claims making interactive services is bad practice. http://msdn.microsoft.com/en-us/library/ms683502%28VS.85%29.aspx
For debugging purposes, it is way cool to make a service run as a console application until you get it doing what you want it to. Awesome tutorial: http://mycomponent.blogspot.com/2009/04/create-debug-install-windows-service-in.html
Anyway, hope that helps a little; those are just a couple of things I poked around to find on my own.
Something obvious - don't run all your tasks at the same time. Try to schedule them so only one task is using some expensive resource at any time (if possible). For example, if you need to send out newsletters and some specific notifications, schedule them at different times. If two tasks need to clean up something in the database, let one run after the other.
Also schedule tasks to run outside of normal business hours - at night obviously.
I want to have an application that works as a Host to many other small applications. Each one of those applications should work as kind of plugin to this main application. I call them plugins not in the sense they add something to the main application, but because they can only work with this Host application as they depend on some of its services.
My idea was to have each of those plugins run in a different app domain. The problem seems to be that my host application should have a set of services that my plugins will want to use, and from what I understand, making data flow in and out of different app domains is not that great of a thing.
On one hand I'd like them to behave as stand-alone applications (although, as I said, they often need to use the host application's services), but on the other hand I'd like that if any of them crashes, my main application wouldn't suffer from it.
What is the best (.NET) approach to this kind of situation? Make them all run on the same AppDomain but each one in a different Thread? Use different AppDomains? One for each "plugin"? How would I make them communicate with the Host Application? Any other way of doing this?
Although speed is not an issue here, I wouldn't like for function calls to be that much slower than they are when we're working with just a regular .NET application.
Thanks
EDIT: Maybe I really need to use different AppDomains. From what I've been reading, loading assemblies in different AppDomains is the only way to later be able to unload them from the process.
I've implemented something along these lines using the Managed Addin Framework (MAF) in the System.Addin namespace. With MAF you package your addins as separate DLLs, which your host app can discover and launch in its app domain, in a separate domain for all of the addins, or each addin in its own domain. With shadow copy and separate domains you can even update an addin without shutting down your hostapp.
Your host app and the addins communicate through contracts that you derive from MAF interfaces. You can send objects back and forth between the host and the addins. The contracts provide a black-box interface between addins and the host, allowing you to change an addin's implementation unbeknownst to the host.
Addins can even communicate between themselves if the host tells them about each other. In my case a logging addin is shared by the others. This lets me drop in different loggers without touching the other addins or the host.
For my app, the addins use simple supervisor classes that launch worker classes on their own threads to do all of the processing. Workers catch their own exceptions, which they return to their supervisor through callback methods. Supervisors can restart workers or take other action. The host controls the supervisors through a command contract, which instructs them to start and stop workers and return data.
My host app is a Windows service. The worker threads have thrown exceptions for all the usual reasons (including bugs!), but the host app has never crashed in any of our installations. Since debugging services is inconvenient, addins allow me to build test apps that use the same contracts, with added assurance that I'm testing what I deploy.
Addins can expose UI elements, too. This is very helpful to me as I need to deploy a controller app with the host service, since services do not have UIs. Each plugin includes its own controller interface. The controller app itself is very simple - it loads the addins and displays their UI elements. This allows me to ship an updated addin with an updated interface and not have to ship a new controller.
Even though the controller and the host service use the same addins, they don't step on each other; in fact, they don't even know that another app is using the same addins. The controller and the host talk to each other through a shared database, but you could also use another inter-app mechanism like MSMQ. In the next version the host will be a WCF service with addins on the backend and web services for control.
This is a bit long-winded but I wanted to give you an idea of how versatile MAF is. It's not as complex as it might first look, and you can build rock-solid apps with it.
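To give a feel for the hosting side, here's a rough sketch of MAF discovery and activation; the pipeline folder and the IEmailAddin host view are made-up placeholders, and a real MAF setup also needs the rest of the pipeline (views, adapters, contracts) built out:

    using System;
    using System.AddIn.Hosting;

    // Host view of the addin (normally defined in its own assembly).
    public interface IEmailAddin
    {
        void Start();
    }

    class HostProgram
    {
        static void Main()
        {
            string pipelineRoot = @"C:\MyApp\Pipeline"; // placeholder path
            AddInStore.Update(pipelineRoot);            // rebuild the addin cache

            foreach (AddInToken token in AddInStore.FindAddIns(typeof(IEmailAddin), pipelineRoot))
            {
                // Activate gives each addin its own AppDomain; a crash in
                // the addin stays in that domain instead of killing the host.
                IEmailAddin addin = token.Activate<IEmailAddin>(AddInSecurityLevel.FullTrust);
                addin.Start();
            }
        }
    }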
It depends on how much trust you wish to allow the extensions. I'm working on a similar application and I've chosen to mostly trust the extension code, as this greatly simplifies things. I call into the code from a common thread (in my case, the extensions don't really 'run' in any continuous loop, but rather execute certain tasks that the main application wants to do) and catch exceptions in this thread, so as to provide helpful warnings that loaded extensions are misbehaving.
Currently there's nothing keeping these extensions from launching their own threads that could throw and crash the whole app, but this is where I've had to make the trade-off between safety and complexity. My application is not mission-critical (not like a web server or database server), so I consider it an acceptable risk that a buggy extension could bring down my application. I provide safeguards to more politely cover the most common failure cases and leave it to the plugin developers (who will mostly be in-house people for now anyway) to clean up their bugs.
In regards to Unloading, yes, you can only unload the code and metadata for an assembly if you place it in an AppDomain. That said, unless you want to be loading and unloading frequently over the life of your program, the overhead associated with keeping the code in memory is not necessarily an issue. Any actual instances or resources using types from the assembly will still be cleaned up by the GC when you stop 'using' it, so the fact that it's still in memory doesn't imply a memory leak.
If your main use case is a series of plugins that you locate once at startup and then provide an option to instantiate while your app is running, I suggest investigating the real memory footprint associated with loading all of them at start-up and keeping them loaded. If you use AppDomains, there will be additional overhead there as well (for instance, memory for the proxy objects and loaded/JITed code to support AppDomain marshaling). There will also be CPU overhead associated with the marshaling and attendant serialization.
In short, I would only use AppDomains if one of the following were true:
I want to get true isolation for the purposes of code security (i.e. I need to run untrusted code in an isolated way)
My app is mission-critical and I absolutely need to make sure that if a plugin fails, it can't bring down my core app.
I need to load and unload the same plugin repeatedly, in order to support dynamic changes to the DLL. This is mainly if my app can't stop running, but I want to hot-patch plugins while it's still running.
I would not prefer AppDomains for the sole purpose of reducing possible memory footprint by allowing Unload.
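If you do end up in one of those cases, a bare-bones sketch of the AppDomain approach looks like this (the assembly and type names are placeholders):

    using System;

    // Host view of a plugin; must derive from MarshalByRefObject so calls
    // from the host cross the AppDomain boundary through a proxy.
    public abstract class PluginBase : MarshalByRefObject
    {
        public abstract void Run();
    }

    class Loader
    {
        static void Main()
        {
            var domain = AppDomain.CreateDomain("PluginDomain");
            try
            {
                var plugin = (PluginBase)domain.CreateInstanceAndUnwrap(
                    "MyCompany.Plugins",           // assembly name (placeholder)
                    "MyCompany.Plugins.Plugin1");  // type name (placeholder)
                plugin.Run();
            }
            finally
            {
                // unloading the domain is the only way to evict its assemblies
                AppDomain.Unload(domain);
            }
        }
    }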
This is an interesting question.
My first idea was to simply implement interfaces from your host application in your plugin applications to allow them to communicate through Reflection, but this would only allow communication and would not bring a real "sandbox-like" architecture.
My second thought was to design a service-oriented platform. The host application would be a kind of "plugin broadcaster" that publishes your plugins in a ServiceHost on a different thread. Since this needs to be really responsive and work with no configuration to speak of, the host application could communicate with the plugins through a named pipe channel (NetNamedPipeBinding in WCF), which only communicates over local pipes and does not need any network configuration or knowledge at all. I think this could be a good solution to your problem, as sketched below.
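A rough sketch of that hosting side; the contract and service names are made up:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IPluginService
    {
        [OperationContract]
        string GetHostStatus();
    }

    public class PluginService : IPluginService
    {
        public string GetHostStatus() { return "ok"; }
    }

    class HostApp
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(PluginService),
                new Uri("net.pipe://localhost/myhost"));
            host.AddServiceEndpoint(typeof(IPluginService),
                new NetNamedPipeBinding(), "plugins");
            host.Open();

            // A plugin connects from its own process like this:
            // var factory = new ChannelFactory<IPluginService>(
            //     new NetNamedPipeBinding(), "net.pipe://localhost/myhost/plugins");
            // IPluginService svc = factory.CreateChannel();

            Console.ReadLine(); // keep the host alive
        }
    }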
Regards.
Suppose I have two applications written in C#. The first is a third party application that raises an event called "OnEmailSent".
The second is a custom app that I've written that I would like to somehow subscribe to the "OnEmailSent" event of the first application.
Is there any way that I could somehow attach the second application to an instance of the first application to listen for "OnEmailSent" event?
So for further clarification, my specific scenario is that we have a custom third party application written in c# that raises an "OnEmailSent" event. We can see the event exists using reflector.
What we want to do is have some other actions take place when this component sends an email.
The most efficient way we can think of would be to use some form of IPC as Anders has suggested and listen for the OnEmailSent event being raised by the third party component.
Because the component is written in C#, we are toying with the idea of writing another C# application that can attach itself to the executing process, and when it detects that the OnEmailSent event has been raised, it will execute its own event-handling code.
I might be missing something, but from what I understand of how remoting works is that there would need to be a server defining some sort of contract that the client can subscribe to.
I was more thinking about a scenario where someone has written a standalone application like outlook for example, that exposes events that I would like to subscribe to from another application.
I guess the scenario I'm thinking of is the .net debugger and how it can attach to executing assemblies to inspect the code whilst it's running.
In order for two applications (separate processes) to exchange events, they must agree on how these events are communicated. There are many different ways of doing this, and exactly which method to use may depend on architecture and context. The general term for this kind of information exchange between processes is Inter-process Communication (IPC). There exists many standard ways of doing IPC, the most common being files, pipes, (network) sockets, remote procedure calls (RPC) and shared memory. On Windows it's also common to use window messages.
I am not sure how this works for .NET/C# applications on Windows, but in native Win32 applications you can hook on to the message loop of external processes and "spy" on the messages they are sending. If your program generates a message event when the desired function is called, this could be a way to detect it.
If you are implementing both applications yourself you can choose to use any IPC method you prefer. Network sockets and higher-level socket-based protocols like HTTP, XML-RPC and SOAP are very popular these days, as they allow you to run the applications on different physical machines as well (given that they are connected via a network).
You can try Managed Spy, and for programmatic access, ManagedSpyLib:
ManagedSpyLib introduces a class called ControlProxy. A ControlProxy is a representation of a System.Windows.Forms.Control in another process. ControlProxy allows you to get or set properties and subscribe to events as if you were running inside the destination process. Use ManagedSpyLib for automation testing, event logging for compatibility, cross-process communication, or whitebox testing.
But this might not work for you; it depends on whether ControlProxy can somehow access the event you're after within your third-party application.
You could also use Reflexil
Reflexil allows IL modifications by using the powerful Mono.Cecil library written by Jb Evain. Reflexil runs as a Reflector plug-in and is directed especially towards IL code handling. It accomplishes this by proposing a complete instruction editor and by allowing C#/VB.NET code injection.
You can either use remoting or WCF. See http://msdn.microsoft.com/en-us/library/aa730857(VS.80).aspx#netremotewcf_topic7.
What's the nature of that OnEmailSent event from that third party application? I mean, how do you know the application is triggering such an event?
If you are planning on doing interprocess communication, the first question you should ask yourself is: Is it really necessary?
Without questioning your motives, if you really need to do interprocess communication, you will need some sort of mechanism. The list is long, very long. From simple WM_COPYDATA messages to custom TCP protocols to very complex Web services requiring additional infrastructures.
This brings the question, what is it you are trying to do exactly? What is this third party application you have no control over?
Also, the debugger has a very invasive way of debugging processes. Don't expect that to be the standard interprocess mechanism used by all other applications. As a matter of fact, it isn't.
You can implement a similar scenario with SQL Server 2005 query change notifications by maintaining a persistent SqlConnection with a .NET application that blocks until data changes in the database.
See http://www.code-magazine.com/article.aspx?quickid=0605061.
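A rough sketch of the SqlDependency wiring; the connection string, table, and columns are placeholders, and the query has to follow the notification rules (explicit column list, two-part table names):

    using System;
    using System.Data.SqlClient;

    class NotificationDemo
    {
        static void Main()
        {
            string cs = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true";
            SqlDependency.Start(cs);

            using (var conn = new SqlConnection(cs))
            using (var cmd = new SqlCommand("SELECT EmailId, SentAt FROM dbo.SentEmails", conn))
            {
                var dependency = new SqlDependency(cmd);
                dependency.OnChange += (s, e) =>
                {
                    // fires once per subscription; re-run the command to re-subscribe
                    Console.WriteLine("Change detected: " + e.Info);
                };
                conn.Open();
                cmd.ExecuteReader().Close(); // executing the command registers the subscription
            }

            Console.ReadLine();
            SqlDependency.Stop(cs);
        }
    }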
WM_COPYDATA might also be possible; see https://social.msdn.microsoft.com/Forums/en-US/eb5dab00-b596-49ad-92b0-b8dee90e24c8/wmcopydata-event-to-receive-data-in-form-application?forum=winforms
I'm using it for a similar purpose (to notify that options have been changed).
In our C++/CLI scenario, (MFC) programs communicate via WM_COPYDATA with an information string in the COPYDATASTRUCT member lpData (a parameter list like "Caller=xyz Receiver=abc Job=dosomething"). A C# app can also receive WM_COPYDATA messages as shown in the link. Sending WM_COPYDATA from C# (to a known main-window handle) is done by a C++/CLI assembly (I haven't checked how sending WM_COPYDATA can be done in pure C#).
PS: in C++/CLI we send AfxGetMainWnd()->m_hWnd as the WPARAM of the WM_COPYDATA message, and in C# (WndProc) m.WParam can then be used as the handle to send WM_COPYDATA replies to.
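For the receiving side in C#, a sketch along these lines should work; the struct mirrors the Win32 COPYDATASTRUCT, and the string encoding (ANSI here) has to match what the sender puts in lpData:

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    public class ReceiverForm : Form
    {
        const int WM_COPYDATA = 0x004A;

        [StructLayout(LayoutKind.Sequential)]
        struct COPYDATASTRUCT
        {
            public IntPtr dwData; // user-defined tag
            public int cbData;    // size of lpData in bytes
            public IntPtr lpData; // pointer to the payload
        }

        protected override void WndProc(ref Message m)
        {
            if (m.Msg == WM_COPYDATA)
            {
                var cds = (COPYDATASTRUCT)m.GetLParam(typeof(COPYDATASTRUCT));
                // cbData counts bytes; trim a trailing null if the sender included one
                string text = Marshal.PtrToStringAnsi(cds.lpData, cds.cbData).TrimEnd('\0');
                // m.WParam carries the sender's window handle, usable for a reply
                Console.WriteLine("Got: " + text);
            }
            base.WndProc(ref m);
        }
    }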