I have a large app which uses COM+ via .NET remoting to call from the web tier to the middle tier.
It's quite slow to start up and to run in this mode. Both sides of the COM+ boundary are our code.
I'd like to be able to (optionally) run it in a single process.
Quite a bit of the behaviour relies on the fact that calls to ServicedComponents have all their arguments serialized, and that changes made to the objects inside the components don't leak out unless the argument is a 'ref' argument.
My current plan to force this two-process app into a single process without changing too much code is to use a fake middle-tier boundary built with custom .NET remoting.
If I change all the:
class BigComponent : ServicedComponent {
...
}
into
[FakeComponent]
class BigComponent : ContextBoundObject {
...
}
Then I can write a custom ContextAttribute to fake the process boundary and make the arguments serialize themselves:
i.e.
[AttributeUsage(AttributeTargets.Class)]
public class FakeComponentAttribute :
ContextAttribute,
IContributeServerContextSink
{
... lots of stuff here
}
As per http://msdn.microsoft.com/en-us/magazine/cc164165.aspx
Now this works fine, so far, and I can intercept all calls to methods on these classes.
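To make the interception concrete, here is a trimmed-down sketch of the kind of server context sink the attribute installs; FakeRemotingSink is a name I made up, and the commented snippet at the end shows how the attribute would hand it into the chain via IContributeServerContextSink:

using System;
using System.Runtime.Remoting.Messaging;

public class FakeRemotingSink : IMessageSink
{
    private readonly IMessageSink _next;

    public FakeRemotingSink(IMessageSink next) { _next = next; }

    public IMessageSink NextSink { get { return _next; } }

    public IMessage SyncProcessMessage(IMessage msg)
    {
        IMethodCallMessage call = msg as IMethodCallMessage;
        if (call != null)
        {
            // Every call on the ContextBoundObject passes through here,
            // so the arguments can at least be inspected...
            foreach (object arg in call.Args)
                Console.WriteLine("intercepted arg: {0}", arg);
        }
        return _next.SyncProcessMessage(msg);
    }

    public IMessageCtrl AsyncProcessMessage(IMessage msg, IMessageSink replySink)
    {
        return _next.AsyncProcessMessage(msg, replySink);
    }
}

// Inside FakeComponentAttribute:
// public IMessageSink GetServerContextSink(IMessageSink nextSink)
// {
//     return new FakeRemotingSink(nextSink);
// }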
However, I can only view the IMethodCallMessage.Args in the IMessageSink.ProcessMessage call -- I don't seem to be able to replace them with objects of my choice.
Any time I change entries in the IMethodCallMessage.Args array, my changes are ignored. From what I can tell in Reflector, this interface is a wrapper around a native object in the runtime itself, and I can't write to this object, just read it.
How can I modify the arguments to method calls in .net remoting?
Do I need to implement my own Channel? Is there a "local" channel tutorial out there I can crib from?
My aim is to have these Components act like remote objects (in that all their args get serialized on the way in to the method, and that their return value is serialized on the way out), but have the remote endpoint be inside the same process.
I have not found a way to edit the argument array as it passes through the IMessageSink.
In the end, I had to make the argument object classes aware of this issue and have them implement a new interface, IFakeRemotingAware. This lets the complex object arguments, which get their pass-by-value behaviour from serialization/deserialization under real remoting, simulate that behaviour under fake remoting.
The interface has two methods: EnteringFakeRemote, which causes the object to cache a local copy of its state, and LeavingFakeRemote, which causes the object to restore its state from the cache.
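In code, the interface and the way the fake-remoting sink drives it look roughly like this (the sink body is a sketch; only the interface and its method names come from the actual solution):

public interface IFakeRemotingAware
{
    // Snapshot the current state before the call enters the component.
    void EnteringFakeRemote();

    // Restore the snapshot after the call returns, discarding any changes
    // the component made, to mimic pass-by-value semantics.
    void LeavingFakeRemote();
}

// Inside the sink's SyncProcessMessage:
// IMethodCallMessage call = msg as IMethodCallMessage;
// foreach (object arg in call.Args)
// {
//     IFakeRemotingAware aware = arg as IFakeRemotingAware;
//     if (aware != null) aware.EnteringFakeRemote();
// }
// IMessage reply = _next.SyncProcessMessage(msg);
// foreach (object arg in call.Args)
// {
//     IFakeRemotingAware aware = arg as IFakeRemotingAware;
//     if (aware != null) aware.LeavingFakeRemote();
// }
// return reply;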
Related
Is there any way to glue metadata to an object in C#?
Context: a framework which sends messages between peers over the network. Messages can be arbitrary serializable user-defined .NET types.
Of course, when a message is sent by a peer, the framework could wrap the object in a Message class which holds the metadata, and the receiver could unwrap it. However, the receiving peer's processing method could decide to resend the message to another peer, and I want to keep the original metadata when it does. The user should not be required to work with Message.RealMessage all the time, except when resending it.
I thought about keeping the wrapped instance in a dictionary and, upon resending, looking up whether there is already a wrapped instance in the dictionary and resending that one. However, as messages may not be resent at all (or may be resent multiple times), this would require more and more memory.
Any solutions? Maybe C# directly supports gluing additional information to an object? Normally I would go for an internal interface, but then the user would have to derive all their classes from a framework base class, which is not possible.
Edit: I kind of want to say "here is an object of WrappedMessage but you are only allowed to use the interface provided by the class T".
There is ConditionalWeakTable, which should do what you want a little better than using a Dictionary directly.
To quote:
Enables compilers to dynamically attach object fields to managed objects.
You can ignore the part about the class being for compilers :-)
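A minimal sketch of how it could be used here (MessageMetadata and MetadataStore are made-up names; this assumes .NET 4.0 or later):

using System;
using System.Runtime.CompilerServices;

// The metadata you want to glue onto a message, whatever that happens to be.
class MessageMetadata
{
    public Guid OriginalSenderId;
    public int HopCount;
}

static class MetadataStore
{
    // Keys are held weakly: once a message object becomes unreachable its
    // metadata entry is collected too, so memory does not grow with
    // messages that are never resent.
    private static readonly ConditionalWeakTable<object, MessageMetadata> Table =
        new ConditionalWeakTable<object, MessageMetadata>();

    public static MessageMetadata For(object message)
    {
        return Table.GetOrCreateValue(message);
    }
}

// On receipt:  MetadataStore.For(msg).HopCount++;
// On resend:   the metadata is still reachable via MetadataStore.For(msg).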
I am currently building an emulator in C#/Silverlight. Because we are emulating a particular software domain, we have domain-level classes (Cube, CubeSet, BaseApp, etc.) that we have to implement within the scope of our emulator. Additionally, these domain-level classes have to be available to the application developer because they are accessible to applications which will be emulated.
So what we have is a .dll which is a compilation of just the domain-level classes, and then within the emulator implementation itself we have a package of the same domain-level classes.
The goal is to dynamically instantiate the application object, which is doable, and then call a sequence of that application's methods to carry out the emulation. However, in calling one of the methods, we have to pass in a domain-level object which is instantiated within the emulator implementation. We have to call AssociateCubes (which takes a CubeSet parameter) on the dynamically instantiated application. When I try to do that dynamically, I'm getting an InvalidCastException which (amusingly enough) says that a "CubeSet" object cannot be cast as a "CubeSet" object. An example of the code being used to dynamically access the application is:
Object o = Activator.CreateInstance(appType);
MethodInfo AssocCubes = o.GetType().GetMethod("AssociateCubes");
AssocCubes.Invoke(o, new object[] { Cubes });
where Cubes is of type CubeSet in the emulator, and the appType is as given by the user.
Is there any way to force some sort of link between the two so that the runtime recognizes that they are in reality the same class? Or are the two classes completely distinct, so that they cannot be associated in a way that allows an object of one type to be cast to the other?
One solution I have considered is simply defining a method to manually copy the contents of one object to an instance in the emulator, but the problem therein is that the application developer can define their own methods for the application class to be used as helper methods.
I may not have explained everything completely, so I can offer any clarifications that may help expose a potential solution.
The InvalidCastException only shows the last portion of the full class name for convenience, but types are compared on their full identity: the full name (including namespaces) and the assembly they come from (which may have a strong name if signed).
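A quick way to confirm that this is what is happening, reusing the variable names from the question, is to compare the full identities of the two types involved:

// The parameter type comes from the dynamically loaded application assembly,
// the argument type from the emulator's own copy of the domain classes.
Type parameterType = AssocCubes.GetParameters()[0].ParameterType;
Type argumentType  = Cubes.GetType();

Console.WriteLine(parameterType.AssemblyQualifiedName);
Console.WriteLine(argumentType.AssemblyQualifiedName);

// Same simple name "CubeSet", but different assemblies, so the CLR treats
// them as two unrelated types and the cast (and the Invoke) fails.
Console.WriteLine(parameterType == argumentType);   // False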
Consider using a unit-testing framework for "mocking" objects. Or at least read up on how such frameworks wrap classes.
The real fix is to use testable class hierarchies. Often, using interfaces instead of concrete classes helps to solve this type of issue.
I don't want to discount the previous answer given, but I have found a solution as I described in the comment I wrote.
What I do instead is pull the domain layer out of the emulator project and compile it separately as a DLL. Now that DLL is referenced in the emulator and the separate applications, so when the types are loaded dynamically they are considered to be the same type after all.
I'm coming to this question from exploring the XNA framework, but I'd like a general understanding. In XNA we fetch a service like this:
ISomeService someService = (ISomeService)Game.Services.GetService(typeof(ISomeService));
and then we do something with whatever functions/properties are in the interface:
someService.DoSomething(); // let's say not a static method but doesn't matter
I'm trying to figure out why this kind of implementation is any better than:
myObject = InstanceFromComponentThatWouldProvideTheService();
myObject.DoSomething();
When you use the services way to get your interface, you're really just getting an instance of the component that provides the service anyway. Right? You can't have an interface "instance". And there's only one class that can be the provider of a service. So all you really have is an instance of your component class, with the only difference being that you only have access to a subset of the component object (whatever subset is in the interface).
How is this any different from just having public and private methods and properties? In other words, the public methods/properties of the component is the "interface", and we can stop with all this roundaboutness. You can still change how you implement that "interface" without breaking anything (until you change the method signature, but that would break the services implementation too).
And there is going to be a 1-to-1 relationship between the component and the service anyway (more than one class can't register to be a provider of the service), and I can't see a class being a provider of more than one service (SRP and all that).
So I guess I'm trying to figure out what problem this kind of framework is meant to solve. What am I missing?
Allow me to explain it via an example from XNA itself:
The ContentManager constructor takes an IServiceProvider. It then uses that IServiceProvider to get an IGraphicsDeviceService, which it in turn uses to get a GraphicsDevice onto which it loads things like textures, effects, etc.
It cannot take a Game, because that class is entirely optional (and is in a dependent assembly). It cannot take a GraphicsDeviceManager (the commonly used implementation of IGraphicsDeviceService) because that, like Game, is an optional helper class for setting up the GraphicsDevice.
It can't take a GraphicsDevice directly, because you may be creating a ContentManager before the GraphicsDevice is created (this is exactly what the default Game class does). So it takes a service that it can retrieve a graphics device from later.
Now here is the real kicker: It could take a IGraphicsDeviceService and use that directly. BUT: what if at some time in the future the XNA team adds (for example) an AudioDevice class that some content types depend on? Then you'd have to modify the method signature of the ContentManager constructor to take an IAudioDeviceService or something - which will break third-party code. By having a service provider you avoid this issue.
In fact - you don't have to wait for the XNA team to add new content types requiring common resources: When you write a custom ContentTypeReader you can get access to the IServiceProvider from the content manager and query it for whatever service you like - even your own! This way your custom content types can use the same mechanism as the first-class XNA graphics types use, without the XNA code having to know about them or the services they require.
(Conversely, if you never load graphics types with your ContentManager, then you never have to provide it with a graphics device service.)
This is, of course, all well and good for a library like XNA, which needs to be updatable without breaking third-party code. Especially for something like ContentManager that is extendible by third parties.
However: I see lots of people running around using DrawableGameComponent, finding that you can't get a shared SpriteBatch into it easily, and so creating some kind of sprite-batch-service to pass that around. This is a lot more complication than you need for a game which generally has no versioning, assembly-dependency, or third-party extensibility requirements to worry about. Just because Game.Services exists, doesn't mean you have to use it! If you can pass things (like a SpriteBatch instance) around directly - just do that - it's much simpler and more obvious.
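To make the ContentManager example above concrete, here is roughly what the default Game setup does for you; the class and the comments are mine, but the types and calls are the standard XNA ones:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public class MyGame : Game
{
    GraphicsDeviceManager graphics;
    ContentManager content;

    public MyGame()
    {
        // GraphicsDeviceManager registers itself on this.Services as
        // IGraphicsDeviceService, even though no GraphicsDevice exists yet.
        graphics = new GraphicsDeviceManager(this);

        // ContentManager only needs the service provider; it looks the
        // graphics device up later, when something is actually loaded.
        content = new ContentManager(Services, "Content");
    }

    protected override void LoadContent()
    {
        // Roughly what ContentManager does internally at load time:
        IGraphicsDeviceService gds =
            (IGraphicsDeviceService)Services.GetService(typeof(IGraphicsDeviceService));
        GraphicsDevice device = gds.GraphicsDevice;

        // content.Load<Texture2D>("someTexture") would now have a device to use.
    }
}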
See http://en.wikipedia.org/wiki/Dependency_inversion_principle (and its links) for a good introduction to the architectural principles behind this.
Interfaces are clearer and easier to mock.
That can be important, depending on your unit test policy.
Using a service provider is also a way of better controlling what portions of your code have access to certain other portions of your code. Similarly to passing an object through your code, you can pass an IServiceProvider implementation through the code to specific modules. This would allow for those modules to access certain services that are accessible through the service provider.
You can have many classes implement the IServiceProvider interface, each of which could provide access to one or more services - they are not restricted to returning a single instance (whether that be to themselves or another object).
For example, a use may be to have an IServiceProvider that contains services for keyboard handling, mouse handling and AI algorithms. Passing this interface to different modules or managers within your code will allow those modules or managers to retrieve the services they require (such as an EnemyManager needing access to the AI service).
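A sketch of that idea; EnemyManager comes from the example above, while IAiService and its single method are made-up names:

using System;

public interface IAiService
{
    float[] ComputePath(float[] start, float[] goal);
}

public class EnemyManager
{
    private readonly IAiService ai;

    // The manager is handed only a service provider, so it can reach the
    // services it needs without seeing the rest of the game's objects.
    public EnemyManager(IServiceProvider services)
    {
        ai = (IAiService)services.GetService(typeof(IAiService));
        if (ai == null)
            throw new InvalidOperationException("No IAiService has been registered.");
    }

    public void Update()
    {
        // use ai.ComputePath(...) to steer enemies here
    }
}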
I'm writing a kind of computing farm, with a central server giving out tasks and nodes that compute them.
I wanted to write it in such a way that the nodes don't know what exactly they are computing. They get from the server an object that implements the IComputable interface, which has one method, .compute(), that returns an IResult object, and they send that result back to the server.
The server is responsible for preparing these objects and serving them through a .getWork() method on a WCF service, and it collects the results through a .submitResult(IResult result) method.
The problem is that the worker nodes need to know not only the interface but also the full object implementation.
I know that Java can ship method implementations (probably as bytecode) through RMI. Is this possible with C#?
What you will have to do is put the type which implements the method you are describing into a separate assembly. You can then ship that assembly as a byte array to the worker nodes, which load it, inspect it for types that implement your interface, and instantiate them. This is the basic pattern for plug-ins in .NET.
Some care has to be taken, though. If you are accepting code from arbitrary sources, you will have to lock down what these loaded assemblies can do (and it is good practice to do so even if you trust the source).
A classic example of how to do this is the Terrarium project, a case study Microsoft produced that involved spreading arbitrary assemblies virally in a secure fashion.
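A minimal sketch of the node-side half of that pattern; it assumes IComputable lives in a shared contract assembly referenced by both server and nodes, and it ignores the sandboxing concerns above:

using System;
using System.Reflection;

public static class ComputationLoader
{
    // assemblyBytes is the plug-in DLL the server sent over the wire,
    // e.g. read on the server with File.ReadAllBytes("MyComputation.dll").
    public static IComputable Load(byte[] assemblyBytes)
    {
        Assembly assembly = Assembly.Load(assemblyBytes);

        foreach (Type type in assembly.GetTypes())
        {
            // Pick a concrete type implementing the shared interface.
            if (typeof(IComputable).IsAssignableFrom(type) && !type.IsAbstract)
                return (IComputable)Activator.CreateInstance(type);
        }

        throw new InvalidOperationException("No IComputable implementation found.");
    }
}

// On the node:  IResult result = ComputationLoader.Load(bytes).compute();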
You can do something like:
Expression<Func<IResult>> lambda = () => MyFunction();   // System.Linq.Expressions
and then serialize the expression tree (the framework has no built-in expression serializer, so you would need to write or find one) and deserialize it on the other end.
I am developing a .NET application which depends heavily on plugins. The application itself contains a connection to a remote server.
Recently I dug into application domains and see them as the ideal solution for isolating the plugin code from the rest of the application.
However, there is one big disadvantage which keeps me from using application domains to host the plugins: there seems to be no way to pass an object by reference to another application domain, which I need in order to hand the plugins a reference to the connection object.
I was hoping someone could give me a workaround so I can pass a reference to that object.
Note: Creating a proxy is out of the question, the connection layer already acts as a proxy since the classes are auto generated.
Note2: System.AddIn can not be used as it is not available on the compact framework.
Have you tried deriving from MarshalByRefObject? It's a pain in that it screws up your inheritance hierarchy, but I think it's what you want.
From the docs:
MarshalByRefObject is the base class for objects that communicate across application domain boundaries by exchanging messages using a proxy. Objects that do not inherit from MarshalByRefObject are implicitly marshal by value. When a remote application references a marshal by value object, a copy of the object is passed across application domain boundaries.
MarshalByRefObject objects are accessed directly within the boundaries of the local application domain. The first time an application in a remote application domain accesses a MarshalByRefObject, a proxy is passed to the remote application. Subsequent calls on the proxy are marshaled back to the object residing in the local application domain.
Types must inherit from MarshalByRefObject when the type is used across application domain boundaries, and the state of the object must not be copied because the members of the object are not usable outside the application domain where they were created.
In my experience, it can be pretty limiting - you really need to do as little as possible across the AppDomain boundary, preferably restricting yourself to operations which only require primitive types, strings, and arrays of both. This may well be due to my own inexperience in working with multiple AppDomains, but it's a warning that it's a bit of a minefield.
To talk to the same instance between AppDomains, it must inherit from MarshalByRefObject. Done this way, every method call to the object (including property accesses, etc.) is actually a remoting call into the other AppDomain. Does that help?
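A sketch of how that could look for the connection scenario above; the class names are made up, and it assumes the full .NET Framework AppDomain API (the Compact Framework's support here is limited):

using System;

// Lives in the main AppDomain; plug-ins only ever see a transparent proxy to it.
public class ConnectionWrapper : MarshalByRefObject
{
    public void Send(string data)
    {
        // forward to the auto-generated connection layer here
    }
}

// Loaded into the plug-in AppDomain; it must also be a MarshalByRefObject so
// the host can call it across the boundary.
public class PluginHost : MarshalByRefObject
{
    private ConnectionWrapper _connection;

    public void Initialize(ConnectionWrapper connection)
    {
        // 'connection' is a proxy: every call on it executes in the main domain.
        _connection = connection;
    }
}

// In the main AppDomain:
// AppDomain pluginDomain = AppDomain.CreateDomain("Plugins");
// PluginHost host = (PluginHost)pluginDomain.CreateInstanceAndUnwrap(
//     typeof(PluginHost).Assembly.FullName, typeof(PluginHost).FullName);
// host.Initialize(new ConnectionWrapper());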
Be aware that MarshalByRefObject proxies are cleaned up based on a lease. In short, if you don't use the object for a specific time it will be reclaimed. You can control this by overriding InitializeLifetimeService to return a lease object that matches your needs. If you return null, you effectively disable the leasing, and the object is then only reclaimed when the AppDomain is unloaded.
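The override mentioned above is a one-liner; extending the wrapper from the previous sketch:

public class ConnectionWrapper : MarshalByRefObject
{
    // Returning null disables lease-based expiry: the proxy stays valid
    // until the owning AppDomain is unloaded.
    public override object InitializeLifetimeService()
    {
        return null;
    }

    public void Send(string data) { /* as before */ }
}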