How do I share properties across COM+ application pool instances? - c#

I have a COM+ application written in C# (ServicedComponent). The application pool size is > 1 in all cases. I am using SharedPropertyGroups to retain and share data. From my testing it is not clear whether all the running instances of the application are sharing the same values.
Are the properties stored in a SharedPropertyGroup shared across all the instances of the same COM+ application?

Each application pool instance (DLLHost process) will get its own shared property manager. From COM+ Shared Property Manager Concepts:
"Shared properties stored in the SPM are available only to objects running in the same process."
So the shared property manager will let you share transient state inside one application pool instance only.
If you want to share state between multiple processes, then you would probably want to look at an out-of-process cache (e.g. Windows Server AppFabric Caching, or a database, depending on the requirements).
Also see .NET Enterprise Services and COM+ 1.5 Architecture where they describe some of the issues when using application pooling:
Memory used by the Shared Property Manager (SPM) is process specific. Application pooling may impact any application that assumes it is using the only instance of the SPM on that machine. There is no longer any common highest-level data store (since components can span processes) for all instances of a COM+ component using application pooling. Alternatively, you can use a cached middle-tier database to store common state that will not only span instances in a process but processes as well. When doing this, you may want to consider using a pooled component that keeps a persistent connection to a database specifically for middle-tier serialization operations. In reality, this is a much better choice even without application pooling, due to the issues surrounding locking and performance of the SPM.
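For completeness, here is a minimal sketch of how a ServicedComponent might read and write SPM state; the group and property names ("Counters", "Hits") are made up for illustration. Whatever you store this way is visible only inside the one DLLHost process that created it:

    using System;
    using System.EnterpriseServices;

    // Minimal sketch: a ServicedComponent using the Shared Property Manager.
    // Note that PropertyLockMode.SetGet only locks individual gets/sets, so
    // this read-increment-write is not atomic across concurrent callers.
    public class HitCounter : ServicedComponent
    {
        public int Increment()
        {
            PropertyLockMode lockMode = PropertyLockMode.SetGet;
            PropertyReleaseMode releaseMode = PropertyReleaseMode.Process;
            bool groupExists, propertyExists;

            var manager = new SharedPropertyGroupManager();
            SharedPropertyGroup group = manager.CreatePropertyGroup(
                "Counters", ref lockMode, ref releaseMode, out groupExists);
            SharedProperty hits = group.CreateProperty("Hits", out propertyExists);

            int value = hits.Value is int ? (int)hits.Value : 0;
            hits.Value = value + 1;
            return value + 1;
        }
    }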

Related

Multi-level cache - AppFabric with MemoryCache

In my current setup I have a dedicated Appfabric server. Most of the objects stored there are reference objects which means most of the operations are 'Get' operations. Therefore I've considered using LocalCache.
Unfortunately, recently I experienced problems with the availability of the cache server resulting from various network issues. The application server continues to work directly with the DB in these cases thanks to a provider I've written. However, it has a very large impact on performance as expected.
I want to be able to use some kind of a local cache for the highly referenced objects, even when the cache server is down. For this purpose I've considered using the MemoryCache of .NET 4. I don't really care about the objects being stale and I rely on a timeout eviction policy, so I don't worry about synchronization between the application servers.
I wanted to hear what you think about this solution.
- Are there any other points I should consider?
- Is there a better solution to provide fast access for highly referenced objects even when the cache server is down?
AppFabric's LocalCache is a client cache, local and in-proc to the client application, which stores references to frequently used data so the application does not need to deserialize the same object again. However, since LocalCache works with the cache server, it will not work if the cache server is down.
One possible solution to your problem is, as you have mentioned, an independent client cache: even if the cache server goes down, the client cache will still be available.
When relying on an in-proc cache, keep in mind that in-process caches store references to the cached objects. If your application modifies an object after getting it from the cache, it will be modified in the cache as well. Also, if multiple threads may end up modifying the same item in the cache, you will need thread synchronization for such objects.
However, even with an independent client cache, your application may end up hitting the database frequently, since data in the client cache of one application server will not be accessible to the other servers.
A better solution might be using replicated cache servers, where each server holds all of the cached data. This will not only improve get performance for referential data but will also eliminate the single point of failure, as in your case.
If AppFabric is not a hard requirement for the application, you may look into NCache for better scalability and high availability.
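To illustrate the fallback idea, here is a minimal sketch using .NET 4's MemoryCache with an absolute-expiration policy. The FallbackCache name and loadFromDb delegate are made up for illustration; a real implementation would try AppFabric first and only fall through to this path when the cache server is unreachable:

    using System;
    using System.Runtime.Caching;

    public class FallbackCache
    {
        private readonly MemoryCache _local = MemoryCache.Default;
        private readonly TimeSpan _localTtl = TimeSpan.FromMinutes(5); // staleness you tolerate

        public object Get(string key, Func<object> loadFromDb)
        {
            object value = _local.Get(key);
            if (value != null)
                return value;

            // Cache miss: load from the database (or AppFabric when it is up)
            // and keep a local copy until the timeout evicts it.
            value = loadFromDb();
            _local.Set(key, value, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.Add(_localTtl)
            });
            return value;
        }
    }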
Did you consider AppFabric's local cache feature? Or is it not suitable for you?

Asp.Net static objects to increase availability

I have an application that uses a static class that is too large and complex to store in the standard ASP.NET Session. I also have problems with the stability of my application, because when the pool is shut down by an error in an external DLL, all static variables are discarded.
I wonder if there is a setting so that each "client" opens its own pool, so that if one user fails it does not knock the others down.
If you have a static class, there is only one instance of that class for the application pool. If this class has something different for each user, it shouldn't be static. If the class only contains general information not pertaining to a specific session, and you don't want to make it an instance class, then make sure there are no exceptions being thrown in the static class's constructor, as shown below.
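As an illustration of that last point, here is a minimal sketch (the class name and the environment variable are made up) of guarding a static constructor, since a type whose static constructor throws becomes unusable for the rest of the application's lifetime:

    using System;

    public static class AppSettingsCache
    {
        public static readonly string ConnectionString;

        // If this static constructor throws, every later use of the class
        // fails with a TypeInitializationException, so guard anything risky.
        static AppSettingsCache()
        {
            try
            {
                ConnectionString =
                    Environment.GetEnvironmentVariable("APP_CONN") ?? string.Empty;
            }
            catch (Exception)
            {
                ConnectionString = string.Empty; // fall back instead of throwing
            }
        }
    }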
In addition to YetAnotherSoftwareDeveloper's answer, application pooling is used to provide a mechanism that can be used to isolate applications for stability and security reasons, not to isolate individual client sessions.
If you have an application that is unstable, you can keep it from having a detrimental effect on other applications by isolating it in its own application pool. This will not provide any stabilizing effect on the application having problems, but will keep it from crashing other applications on the same server.

C# Multiple console application runs and isolation

If I deploy a C# console app, which does the following:
reads message (ActiveMQ)
processes message contents
writes result to database (SQL Server)
Would there be any issues with running this multiple times e.g. what if I created a batch file and ran 100 instances? Would there be any conflict given that each instance would be using the same shared DLLs e.g. Apache.NMS.ActiveMQ.
The other option would be to deploy the app multiple times, but I'd rather not have to manage duplicated folders. I'm also avoiding threading at the moment but that will be an option for further development in future.
Just want to clarify what happens with those DLLs, and check that there wouldn't be a threading type conflict, e.g. one instance writing the results of another instance's processing to the database...
No, there will be no problem with loading the same DLL files into multiple processes as you describe. You would only run into problems running multiple instances of the same application if the processes needed exclusive access to a shared resource, like a file. With regard to writing to a database, as long as you design your application so that multiple clients can write data without overwriting each other or causing some sort of inconsistency in the domain integrity of the data, then again, no problem.
However, I would strongly suggest you look at making your application multi-threaded if it is concurrency you need, or at Application Domains if it is isolation you need. Running multiple processes is much more expensive in terms of resources than either of these two options.
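As a minimal sketch of the multi-threaded route (ProcessNextMessage is a hypothetical stand-in for your read-from-ActiveMQ / process / write-to-SQL-Server cycle), you could run the loop on several tasks inside a single process instead of launching 100 copies of the executable:

    using System;
    using System.Threading.Tasks;

    class Program
    {
        static void Main()
        {
            const int workerCount = 10; // instead of 100 separate processes
            var workers = new Task[workerCount];
            for (int i = 0; i < workerCount; i++)
            {
                workers[i] = Task.Run(() =>
                {
                    while (true)
                    {
                        ProcessNextMessage();
                    }
                });
            }
            Task.WaitAll(workers);
        }

        static void ProcessNextMessage()
        {
            // placeholder: receive from ActiveMQ, process, write to SQL Server
        }
    }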

With 2 web servers, will a singleton class have 2 instances?

With 2 web servers, will a singleton class have 2 instances?
Both web servers will have separate instances of their application processes, be it .NET or Java. So yes, both servers will have their own instances of your singleton class.
Regardless of whether these two web servers are two different physical machines, even if they are on the same machine they will run in entirely different processes. Each process will load its objects into memory separately from any other process.
Specifically in the case of ASP.NET:
Even on a single web server, each site will get its own instance of the singleton class. Because each site in the ASP.NET worker process is loaded into a separate application domain, no two domains can interfere with each other's objects. So in ASP.NET even a single web server with a single ASP.NET worker process can/will have multiple instances of the singleton class, each separate from the others.
Yes, you have a Singleton per JVM, and even per class loader.
See this When is a Singleton not a Singleton? article (for Java).
What do you mean by "2 web servers"?
A static field (in a singleton) is scoped to an Application Domain (in .NET).
So if your two web servers run in two separate application domains, then yes, otherwise no.
In fact you may well have 2 instances of a singleton class in ONE web server, if it's part of a web app that is deployed twice.
In Java, the classloader is part of a class's identity. You can load the same class twice with different classloaders, and all static fields will exist twice. C# has a similar mechanism.
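The .NET analogue is easy to demonstrate; here is a minimal sketch (assuming .NET Framework, where AppDomains are available) in which the same static field holds independent values in two application domains:

    using System;

    public class Counter : MarshalByRefObject
    {
        public static int Value;
        public void Increment() { Value++; }
        public int Read() { return Value; }
    }

    class Program
    {
        static void Main()
        {
            AppDomain domain = AppDomain.CreateDomain("second");
            var remote = (Counter)domain.CreateInstanceAndUnwrap(
                typeof(Counter).Assembly.FullName, typeof(Counter).FullName);

            new Counter().Increment();               // bumps Value in the default domain
            Console.WriteLine(new Counter().Read()); // 1
            Console.WriteLine(remote.Read());        // 0 - a separate copy of the static
            AppDomain.Unload(domain);
        }
    }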
It is possible to create a web server that multiplexes over everything: connections, listening sockets, even interfaces. It will still be one instance of the class, only one thread, and on top of that have a pretty small memory footprint. The caveat is that while from most practical standpoints it will look like two servers, it will still be only one server (and if it crashes, it crashes as a whole).
This approach is not nearly as popular as multi-threaded web servers though, because while lighter on hardware, it is harder for the developer to handle: you have to explicitly multiplex between everything and juggle all connections in non-blocking calls. If you spawn some extra threads, the OS takes a lot of work off your hands, allowing you to write a more feature-rich server more easily.
Of course even with a single-threaded server, spawning a second server in a separate task is still possible, simply by the user/admin/whoever executing the binary again with a different config. It takes some pretty fancy programming to prevent that from happening.
This is why JSP/Servlets provide the idea of "session" and "application" data. These should be shared across servers in a multi-server environment.
As an extension of this question, specifically for .NET web apps, you also need to pay attention to SessionState handling. Assuming that sessions aren't "sticky" (user stays on one web server once session is established), you'll need to change SessionState to out-of-process. This can either be the ASP.NET session state server, or SQL Server, but the key point to remember is that SessionState isn't automatically shared across servers, unless you make it shared by going out-of-process. Also, anything you put in SessionState needs to be serializable; add the [Serializable] attribute to any classes you use in SessionState.
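For example, here is a minimal sketch of a class that is safe to put in out-of-process SessionState (the class name is made up for illustration):

    using System;

    // Out-of-process SessionState (state server or SQL Server) serializes
    // everything it stores, so the type must be marked [Serializable].
    [Serializable]
    public class CartItem
    {
        public int ProductId { get; set; }
        public int Quantity { get; set; }
    }

    // usage inside a page or controller:
    // Session["cart"] = new CartItem { ProductId = 7, Quantity = 2 };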

Question about how to implement a c# host application with a plugin-like architecture

I want to have an application that works as a Host to many other small applications. Each one of those applications should work as kind of plugin to this main application. I call them plugins not in the sense they add something to the main application, but because they can only work with this Host application as they depend on some of its services.
My idea was to have each of those plugins run in a different app domain. The problem seems to be that my host application has a set of services that my plugins will want to use, and from my understanding, making data flow in and out of different app domains is not that great a thing.
On one hand I'd like them to behave as stand-alone applications (although, as I said, they will often need to use the host application's services), but on the other hand I'd like that if any of them crashes, my main application won't suffer from it.
What is the best (.NET) approach to this kind of situation? Make them all run on the same AppDomain but each one in a different Thread? Use different AppDomains? One for each "plugin"? How would I make them communicate with the Host Application? Any other way of doing this?
Although speed is not an issue here, I wouldn't like for function calls to be that much slower than they are when we're working with just a regular .NET application.
Thanks
EDIT: Maybe I really need to use different AppDomains. From what I've been reading, loading assemblies in different AppDomains is the only way to later be able to unload them from the process.
I've implemented something along these lines using the Managed Addin Framework (MAF) in the System.Addin namespace. With MAF you package your addins as separate DLLs, which your host app can discover and launch in its app domain, in a separate domain for all of the addins, or each addin in its own domain. With shadow copy and separate domains you can even update an addin without shutting down your host app.
Your host app and the addins communicate through contracts that you derive from MAF interfaces. You can send objects back and forth between the host and the addins. The contracts provide a black-box interface between addins and the host, allowing you to change an addin's implementation unbeknownst to the host.
Addins can even communicate between themselves if the host tells them about each other. In my case a logging addin is shared by the others. This lets me drop in different loggers without touching the other addins or the host.
For my app, the addins use simple supervisor classes that launch worker classes on their own threads to do all of the processing. Workers catch their own exceptions, which they return to their supervisor through callback methods. Supervisors can restart workers or take other action. The host controls the supervisors through a command contract, which instructs them to start and stop workers and return data.
My host app is a Windows service. The worker threads have thrown exceptions for all the usual reasons (including bugs!), but the host app has never crashed in any of our installations. Since debugging services is inconvenient, addins allow me to build test apps that use the same contracts, with added assurance that I'm testing what I deploy.
Addins can expose UI elements, too. This is very helpful to me as I need to deploy a controller app with the host service, since services do not have UIs. Each plugin includes its own controller interface. The controller app itself is very simple - it loads the addins and displays their UI elements. This allows me to ship an updated addin with an updated interface and not have to ship a new controller.
Even though the controller and the host service use the same addins, they don't step on each other; in fact, they don't even know that another app is using the same addins. The controller and the host talk to each other through a shared database, but you could also use another inter-app mechanism like MSMQ. In the next version the host will be a WCF service with addins on the backend and web services for control.
This is a bit long-winded but I wanted to give you an idea of how versatile MAF is. It's not as complex as it might first look, and you can build rock-solid apps with it.
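To give a flavor of the hosting side, here is a minimal discovery-and-activation sketch against the System.AddIn hosting API. ILogger stands in for a host view that the MAF pipeline would normally generate, and the pipeline path is made up; a real project also needs the contract and adapter assemblies laid out in the pipeline folders:

    using System;
    using System.AddIn.Hosting;

    class HostSketch
    {
        static void Main()
        {
            string pipelineRoot = @"C:\MyApp\Pipeline"; // hypothetical pipeline folder
            AddInStore.Update(pipelineRoot);            // rebuild the addin cache

            foreach (AddInToken token in AddInStore.FindAddIns(typeof(ILogger), pipelineRoot))
            {
                // Activate each addin in its own AppDomain for isolation.
                ILogger logger = token.Activate<ILogger>(AddInSecurityLevel.Internet);
                logger.Log("host started");
            }
        }
    }

    // Hypothetical host view; in real MAF this lives in its own assembly
    // and is connected to the addin through contract/adapter layers.
    public interface ILogger
    {
        void Log(string message);
    }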
It depends on how much trust you wish to allow the extensions. I'm working on a similar application and I've chosen to mostly trust the extension code, as this greatly simplifies things. I call into the code from a common thread (in my case, the extensions don't really 'run' in any continuous loop, but rather execute certain tasks that the main application wants to do) and catch exceptions in this thread, so as to provide helpful warnings that loaded extensions are misbehaving.
Currently there's nothing keeping these extensions from launching their own threads that could throw and crash the whole app, but this where I've had to make the trade-off between safety and complexity. My application is not mission-critical (not like a web server or database server), so I consider it an acceptable risk that a buggy extension could bring down my application. I provide safeguards to more politely cover the most common failure cases and leave it to the plugin developers (who will mostly be in-house people for now anyway) to clean up their bugs.
In regards to Unloading, yes, you can only unload the code and metadata for an assembly if you place it in an AppDomain. That said, unless you want to be loading and unloading frequently over the life of your program, the overhead associated with keeping the code in memory is not necessarily an issue. Any actual instances or resources using types from the assembly will still be cleaned up by the GC when you stop 'using' it, so the fact that it's still in memory doesn't imply a memory leak.
If your main use case is a series of plugins that you locate once at startup and then provide an option to instantiate while your app is running, I suggest investigating the real memory footprint associated with loading all of them at start-up and keeping them loaded. If you use AppDomains, there will be additional overhead there as well (for instance, memory for the proxy objects and loaded/JITed code to support AppDomain marshaling). There will also be CPU overhead associated with the marshaling and attendant serialization.
In short, I would only use AppDomains if one of the following were true:
- I want to get true isolation for the purposes of code security (i.e. I need to run untrusted code in an isolated way).
- My app is mission-critical and I absolutely need to make sure that if a plugin fails, it can't bring down my core app.
- I need to load and unload the same plugin repeatedly, in order to support dynamic changes to the DLL. This is mainly if my app can't stop running, but I want to hot-patch plugins while it's still running.
I would not prefer AppDomains for the sole purpose of reducing possible memory footprint by allowing Unload.
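If hot-swapping is the requirement, a minimal sketch looks like this (assuming .NET Framework; the "Plugins.dll" and "Plugins.Plugin" names are made up for illustration):

    using System;

    class PluginLoader
    {
        static void Main()
        {
            AppDomain domain = AppDomain.CreateDomain("plugin-domain");
            try
            {
                // The plugin type should derive from MarshalByRefObject so only
                // a proxy crosses into the default domain, not the assembly.
                var plugin = (MarshalByRefObject)domain.CreateInstanceFromAndUnwrap(
                    "Plugins.dll", "Plugins.Plugin");
                // ... call the plugin through an interface both domains share ...
            }
            finally
            {
                // Unloading the domain is the only way to evict the plugin
                // assembly's code and metadata from the process.
                AppDomain.Unload(domain);
            }
        }
    }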
This is an interesting question.
My first idea was to simply implement interfaces from your host application in your plugin applications to allow them to communicate through Reflection, but this would only allow communication and would not bring a real "sandbox-like" architecture.
My second thought was to design a service-oriented platform. The host application would be a kind of "plugin broadcaster" that publishes your plugins in a ServiceHost on a different thread. As this needs to be really responsive and require no configuration, the host application could communicate with the plugins through a named pipes channel (NetNamedPipeBinding in WCF), which only talks over local pipes and does not need any network configuration or knowledge at all. I think this could be a good solution to your problem.
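A minimal sketch of that idea (the IHostServices contract and the addresses are made up for illustration): the host publishes a service over NetNamedPipeBinding, and each plugin connects with a ChannelFactory:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IHostServices
    {
        [OperationContract]
        string GetSetting(string key);
    }

    public class HostServices : IHostServices
    {
        public string GetSetting(string key) { return "value-for-" + key; }
    }

    class Program
    {
        static void Main()
        {
            using (var host = new ServiceHost(typeof(HostServices),
                new Uri("net.pipe://localhost/myapp")))
            {
                host.AddServiceEndpoint(typeof(IHostServices),
                    new NetNamedPipeBinding(), "hostServices");
                host.Open();

                // A plugin (possibly in another process) connects like this:
                var factory = new ChannelFactory<IHostServices>(
                    new NetNamedPipeBinding(),
                    new EndpointAddress("net.pipe://localhost/myapp/hostServices"));
                IHostServices proxy = factory.CreateChannel();
                Console.WriteLine(proxy.GetSetting("theme"));
            }
        }
    }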
Regards.
