C# object metadata

Is there any way to glue metadata to an object in C#?
Context: Framework which is sending messages between peers over the network. Messages can be arbitrary serializable user-defined .NET types.
Of course, when a message is sent by a peer, the framework could wrap the object in a Message class that carries the metadata, and the receiver could unwrap it. However, the peer's processing method might decide to resend the message to another peer, and in that case I want to keep the original metadata. The user should not be required to go through Message.RealMessage all the time except when resending.
I thought about keeping the wrapped instance in a dictionary and, upon resending, looking up whether a wrapped instance already exists and resending that one. However, since messages may never be resent (or may be resent multiple times), the dictionary would hold on to wrappers indefinitely and consume more and more memory.
Any solutions? Maybe C# directly supports attaching additional information to an object? Normally I would go for an internal interface; however, the user would then have to derive all their classes from a framework base class, which is not possible.
Edit: I kind of want to say "here is an object of WrappedMessage but you are only allowed to use the interface provided by the class T".

There is the ConditionalWeakTable class, which should do what you want a little better than using a Dictionary directly.
To quote:
Enables compilers to dynamically attach object fields to managed objects.
You can ignore the part about the class being for compilers :-)
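
A minimal sketch of how that might look (the MessageMetadata type and the store are made up for illustration; because the table holds its keys weakly, entries disappear once a message object is collected, which addresses the memory concern above):

using System;
using System.Runtime.CompilerServices;

class MessageMetadata
{
    // Hypothetical metadata to carry alongside a message.
    public Guid OriginalSenderId { get; set; }
    public DateTime SentAt { get; set; }
}

static class MessageMetadataStore
{
    // Keys are held weakly: once a message is garbage-collected its entry
    // goes away, so messages that are never resent do not leak memory.
    static readonly ConditionalWeakTable<object, MessageMetadata> table =
        new ConditionalWeakTable<object, MessageMetadata>();

    // Note: Add throws if metadata was already attached to this message.
    public static void Attach(object message, MessageMetadata metadata)
    {
        table.Add(message, metadata);
    }

    public static bool TryGet(object message, out MessageMetadata metadata)
    {
        return table.TryGetValue(message, out metadata);
    }
}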

Related

SignalR and object-based payloads

I'm playing a bit with SignalR and relating it back to some previous Pub/Sub work. In it, we have a Base Event with a couple of mandatory properties and then several Derived Events for specific payloads.
With SignalR, it appears that I need to define a hub based on each of the derived events, as Send is going to deal with a specific type. For example, if I create a hub for the base class, I can send any of the derived types or the base type without error, but I always get back a base type, losing the derived type's properties.
Seems my choices are a hub for each type or putting the derived properties in some type of blob to be parsed out by the receiver.
How far off is my thinking?
AFAIK, SignalR uses a dynamic way to describe and (de)serialize payloads, so at runtime it tries to match the type declared on the receiving side without attempting to match any derived type. This mechanism has the advantage of working without requiring types to be shared across clients and server, but also the disadvantage you are experiencing. That should explain what you see.
You could base your solution on dynamic: if you want to keep your hierarchy of payloads, you will have to take care of deserializing the received dynamic value into instances of those types yourself, perhaps with the help of a "record type" member on the base class. You would not need to do a full parsing.
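
A sketch of that idea (the event types, the EventType member, and the use of Json.NET are assumptions for illustration):

using System;
using Newtonsoft.Json.Linq;

public class BaseEvent
{
    // The "record type" discriminator mentioned above.
    public string EventType { get; set; }
    public DateTime Timestamp { get; set; }
}

public class OrderPlacedEvent : BaseEvent
{
    public int OrderId { get; set; }
}

public static class EventReader
{
    // Inspect the discriminator first, then materialize the right subtype.
    public static BaseEvent Read(string json)
    {
        var envelope = JObject.Parse(json);
        switch ((string)envelope["EventType"])
        {
            case "OrderPlaced":
                return envelope.ToObject<OrderPlacedEvent>();
            default:
                return envelope.ToObject<BaseEvent>(); // fall back to the base type
        }
    }
}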

StreamInsight user defined functions limitation

What are the limitations of StreamInsight user defined functions?
Does the object need to be serializable?
Can it call external (remote) services?
If so, these look to be very, very, very powerful!
Off the top of my head, a User Defined Function (UDF) is a static method call and operates on one event at a time. If you need something to work with more than one event at a time, you'll need to take a look at User Defined Operators (UDO) or User Defined Aggregates (UDAs). If you need to maintain state for any reason, you should be looking at UDOs or User Defined Stream Operators (UDSOs).
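For illustration, a UDF is just a static method referenced from a LINQ query; this minimal sketch assumes an event type with a Celsius field:

public static class TemperatureUdfs
{
    // A StreamInsight UDF: a plain static method, invoked once per event.
    public static double ToFahrenheit(double celsius)
    {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

// Used from a standard StreamInsight LINQ query, e.g. given a
// CepStream<SensorReading> named input:
//   var converted = from e in input
//                   select new { Fahrenheit = TemperatureUdfs.ToFahrenheit(e.Celsius) };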
Remember that your payload classes only provide a schema to StreamInsight, so they don't need to be marked as serializable. Anything that actually gets serialized by StreamInsight will need to be marked serializable (e.g. configuration classes for adapters).
You can call out to external/remote services from the different UDFs, UDOs, UDAs, and UDSOs. However, these calls will effectively be blocking calls on one of the StreamInsight scheduler threads, which will increase latency. Event input and output should be done by the adapters only; UDFs and the like should be used for processing the streams.

Command pattern and complex operations in C#

I am writing a program in C# that needs to support undo/redo. For this purpose, I settled on the Command pattern; tldr, every operation that manipulates the document state must be performed by a Command object that knows about the previous state of the document as well as the changes that need to be made, and is capable of doing/undoing itself.
It works fine for simple operations, but I now have an operation that affects several parts of the document at once. Likewise, the Command object must be smart enough to know all the old state it needs to preserve in case it needs to be undone.
The problem is that exposing all that state through public interfaces has the potential for misuse if someone calls the interface directly, which can lead to state corruption. My instincts tell me the most OO way of doing this is to expose specialized Command classes -- rather than allowing you to manipulate the state of the document directly, all you can do is ask the document to create a Command object which has access to its internal state and is guaranteed to know enough to properly support undo/redo.
Unfortunately, C# doesn't support the concept of friends, so I can't create a Command class that has access to document internals. Is there a way to expose the private members of the document class to another class, or is there some other way to do what I need without having to expose a lot of document internals?
It depends. If you are deploying a library, your Document could declare 'internal' methods to interact with its internal state; these methods would be used by your Command class. Internal methods are accessible only within the assembly they are compiled into.
Or you could nest a private class inside your Document, allowing it to access the Document's internal state, and expose a public interface to it; your Document would then create a command class hidden behind that interface.
First, C# has the internal keyword that declares "friend" accessibility, which allows public access from within the entire assembly.
Second, the "friend" accessibility can be extended to a second assembly with an assembly attribute, InternalsVisibleTo, so that you could create a second project for your commands, and yet the internals of the document will stay internal.
Alternatively, if your command objects are nested inside the document class, then they will have access to all its private members.
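A minimal sketch of the nested approach (ICommand and the member names are illustrative):

public interface ICommand
{
    void Do();
    void Undo();
}

public class Document
{
    private string text = "";

    // The only way to mutate the document: ask it for a command.
    public ICommand CreateReplaceTextCommand(string newText)
    {
        return new ReplaceTextCommand(this, newText);
    }

    // Nested private class: full access to Document's private members.
    private class ReplaceTextCommand : ICommand
    {
        private readonly Document doc;
        private readonly string newText;
        private string oldText;

        public ReplaceTextCommand(Document doc, string newText)
        {
            this.doc = doc;
            this.newText = newText;
        }

        public void Do() { oldText = doc.text; doc.text = newText; }
        public void Undo() { doc.text = oldText; }
    }
}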
Finally, complex commands could also simply clone the document before making changes. That is an easy solution, albeit not very optimized.
You could always access fields and properties, private or not, through reflection (Type.GetField(string, BindingFlags.NonPublic | BindingFlags.Instance) & friends).
Maybe with a custom attribute on the class (or the field/property) to automate the process of grabbing enough state for each Command?
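For instance, a command could snapshot and restore a private field via reflection (the helper below is illustrative):

using System.Reflection;

static class ReflectionStateHelper
{
    // Save the current value of a private instance field before a change.
    public static object SaveField(object target, string fieldName)
    {
        FieldInfo field = target.GetType().GetField(
            fieldName, BindingFlags.NonPublic | BindingFlags.Instance);
        return field.GetValue(target);
    }

    // Restore the saved value on undo.
    public static void RestoreField(object target, string fieldName, object saved)
    {
        FieldInfo field = target.GetType().GetField(
            fieldName, BindingFlags.NonPublic | BindingFlags.Instance);
        field.SetValue(target, saved);
    }
}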
Instead of having one command make changes at different places in the document, you could have two dummy commands that mark the start and end of a multi-step operation. Let us call them BeginCommand and EndCommand. First, you push the BeginCommand on the undo stack, and then you perform the different steps as single commands, each of them changing a single place in the document only. Of course, you push them on the undo stack as well. Finally, you push the EndCommand on the undo stack.
When undoing, you check whether the command popped from the undo stack is the EndCommand. If it is, you continue undoing until the BeginCommand is reached.
This turns the multi-step command into a macro-command delegating the work to other commands. This macro-command itself is not pushed on the undo stack.
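A sketch of that undo loop, reusing the ICommand interface from the sketch above (names are illustrative; nested Begin/End pairs would need extra handling):

using System.Collections.Generic;

public sealed class BeginCommand : ICommand
{
    public void Do() { }   // pure marker, does nothing itself
    public void Undo() { }
}

public sealed class EndCommand : ICommand
{
    public void Do() { }
    public void Undo() { }
}

public class UndoManager
{
    private readonly Stack<ICommand> undoStack = new Stack<ICommand>();

    public void Push(ICommand command)
    {
        undoStack.Push(command);
    }

    public void Undo()
    {
        ICommand command = undoStack.Pop();
        command.Undo();
        if (command is EndCommand)
        {
            // Unwind every step until the matching BeginCommand is reached.
            while (!((command = undoStack.Pop()) is BeginCommand))
            {
                command.Undo();
            }
        }
    }
}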

Use .NET remoting to call a local "remote"?

I have a large app which uses COM via .NET remoting to call from the web tier to the middle tier.
It's quite slow to start up and to run in this mode. Both sides of the COM boundary are our code.
I'd like to be able to (optionally) run it in a single process.
Quite a bit of the behaviour relies on calls to ServicedComponents having all their arguments serialized, and on changes made to the objects inside the components not leaking out unless the argument is a 'ref' argument.
My current plan to force this two-process app into a single process, without needing to change too much code, is to fake the middle-tier boundary with custom .NET remoting.
If I change all the:
class BigComponent : ServicedComponent {
...
}
into
[FakeComponent]
class BigComponent : ContextBoundObject {
...
}
Then I can write a custom ContextAttribute to fake the process boundary and make the arguments serialize themselves:
i.e.
[AttributeUsage(AttributeTargets.Class)]
public class FakeComponentAttribute :
ContextAttribute,
IContributeServerContextSink
{
... lots of stuff here
}
As per http://msdn.microsoft.com/en-us/magazine/cc164165.aspx
Now this works fine, so far, and I can intercept all calls to methods on these classes.
However, I can only view the IMethodCallMessage.Args in the IMessageSink.ProcessMessage call -- I don't seem to be able to replace them with objects of my choice.
Any time I change entries in the IMethodCallMessage.Args array, my changes are ignored. From what I can tell in Reflector, this interface is a wrapper around a native object in the runtime itself, and I can't write to this object, just read it.
How can I modify the arguments to method calls in .NET remoting?
Do I need to implement my own Channel? Is there a "local" channel tutorial out there I can crib from?
My aim is to have these Components act like remote objects (in that all their args get serialized on the way in to the method, and that their return value is serialized on the way out), but have the remote endpoint be inside the same process.
I have not found a way to edit the argument array as it passes through the IMessageSink.
In the end, I had to make the argument object classes aware of this issue and implement a new interface, IFakeRemotingAware. This allows complex object arguments, which exhibit pass-by-value behaviour under real remoting due to serialization/deserialization, to simulate that behaviour under fake remoting.
The interface has two methods: EnteringFakeRemote which causes the object to cache a local copy of its state, and LeavingFakeRemote which causes the object to restore its state from the cache.
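From that description, the interface might look like this (a reconstruction, not the original code):

public interface IFakeRemotingAware
{
    // Entering the fake remote boundary: cache a local copy of the
    // object's state.
    void EnteringFakeRemote();

    // Leaving the fake boundary: restore state from the cache, so changes
    // made inside the component do not leak out, as with real remoting.
    void LeavingFakeRemote();
}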

how to use extensions from protocol buffers to maintain 'general' message

My client-server communication looks like this: there are some so-called announcements, which are separate messages used to exchange information. The idea is that the announcement is the common part of every message; in effect it is the type of the message, and the type decides what the content is. In a UML class diagram, Announcement would be the class all other messages inherit from.
I want to implement that idea in communication between two applications, one written in C++ and the other in C#. I thought I could write a message that contains one field with the type of the message (an enum field). All additional information relevant to the type would be implemented as extensions.
I have found some examples of how to use extensions in C++, but I have no clue how to do it in C#. I know there are the interfaces IExtensible and IExtension (in protobuf-net), but how can I use them? Internet resources seem to be poor on the matter.
I suppose that in the past, messages in C# used to be defined in a fashion similar to how they are still defined in C++ apps (using a proto file and protoc). Can I use the same proto file to define the message in C#? How? Will extensions be interpreted or overridden?
If I could implement extensions, I would send a message, parse it, check the type, and use the appropriate function to handle it. That sounds cool to me because I wouldn't have to take care of the type of the message I was going to read - I wouldn't have to know the type before parsing.
There are a number of ways you could do this. I'm not actually sure extensions is the one I would leap for, but:
in your message type, you could have a set of fully defined fields for each sub-message, i.e.
message BaseMessage {
  // fields 1-5: common fields
  optional SubMessage1 sub1 = 20;
  optional SubMessage2 sub2 = 21;
  optional SubMessage3 sub3 = 22;
  optional SubMessage4 sub4 = 23;
}

message SubMessage1 {
  // fields 1-n: specific fields
}
where you would have exactly one of the sub-message fields set
alternatively, encapsulate the common parts inside the more specific message:
message CommonFields {
  // fields 1-n
}

message SubMessage1 {
  optional CommonFields common = 1;
  // fields 2-m: specific fields
}
Either approach would allow you to deserialize; the second is trickier, IMO, since it requires you to know the type ahead of time. The only convenient way to do that is to prefix each message with a different identifier. Personally I prefer the first. This does not, however, require extensions - since we know everything ahead of time. As it happens, the first is also how protobuf-net implements inheritance, so you could do that with type inheritance (4 concrete sub-types of an abstract base message type) and [ProtoInclude(...)].
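For illustration, the inheritance approach in protobuf-net might look like this (type names and field numbers are invented):

using ProtoBuf;

[ProtoContract]
[ProtoInclude(20, typeof(SubMessage1))]
[ProtoInclude(21, typeof(SubMessage2))]
public abstract class BaseMessage
{
    [ProtoMember(1)]
    public int CommonField { get; set; }
}

[ProtoContract]
public class SubMessage1 : BaseMessage
{
    [ProtoMember(1)]
    public string SpecificField { get; set; }
}

[ProtoContract]
public class SubMessage2 : BaseMessage
{
    [ProtoMember(1)]
    public double OtherField { get; set; }
}

// Deserializing as the base type yields the correct concrete subtype:
//   var msg = Serializer.Deserialize<BaseMessage>(stream);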
Re extension data: protobuf-net does support that; however, as mentioned in the blog, it is not included in the current v2 beta. It will be there soon, but I had to draw the line somewhere. It is included in the v1 (r282) download, though.
Note that protobuf-net is just one of several C#/.NET implementations. The wire format is the same, but you might also want to consider the directly ported version, protobuf-csharp-port. If I had to summarise the difference, I would say "protobuf-net is a .NET serializer that happens to be protobuf; protobuf-csharp-port is a protobuf serializer that happens to be .NET". They both achieve the same end, but protobuf-net focuses on being idiomatic to C#/.NET, whereas the port focuses more on having the same API. Either should work here, of course.
