My client-server communication looks like this: there are so-called announcements, which are separate messages used to exchange information. The idea is that the announcement is the common part of every message; I suppose it will effectively be the type of the message, and the type decides what the content is. In a UML class diagram, Announcement would be the class all other messages inherit from.
I want to implement that idea in communication between two applications, one written in C++ and the other in C#. I thought I could write a message that contains one field with the type of the message (an enum field). All additional information relevant to the type would be implemented as extensions.
I have found some examples of how to use extensions in C++, but I have no clue how to do it in C#. I know there are the IExtensible and IExtension interfaces (in protobuf-net), but how can I use them? Internet resources seem to be poor on the matter.
I suppose that in the past, messages in C# used to be defined in a fashion similar to how they are still defined in C++ apps (using a .proto file and protoc). Can I use the same .proto file to define the message in C#? How? Will extensions be interpreted or overridden?
If I could implement extensions, I would send a message, parse it, check the type, and use the appropriate function to handle it. That sounds good to me, because I wouldn't have to take care of the type of the message I was going to read - I wouldn't have to know the type before parsing.
There are a number of ways you could do this. I'm not actually sure extensions is the one I would leap for, but:
in your message type, you could have a set of fully defined fields for each sub-message, i.e.
base-message
{1-5} common fields
{optional 20} sub-message 1
{optional 21} sub-message 2
{optional 22} sub-message 3
{optional 23} sub-message 4
sub-message 1
{1-n} specific fields
where you would have exactly one of the sub-message fields set
alternatively, encapsulate the common parts inside the more specific message:
common field type
{1-n} fields
sub-message 1
{1} common field type
{2-m} specific fields
Either approach would allow you to deserialize; the second is trickier, IMO, since it requires you to know the type ahead of time, and the only convenient way to do that is to prefix each with a different identifier. Personally I prefer the first. Note that this does not require extensions, since we know everything ahead of time. As it happens, the first is also how protobuf-net implements inheritance, so you could do that with type inheritance (4 concrete sub-types of an abstract base message type) and [ProtoInclude(...)].
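For illustration, a minimal protobuf-net sketch of that inheritance approach; the type names, member names, and tag numbers here are made up:

using ProtoBuf;

[ProtoContract]
[ProtoInclude(20, typeof(SubMessage1))]
[ProtoInclude(21, typeof(SubMessage2))]
public abstract class BaseMessage
{
    [ProtoMember(1)]
    public int SequenceNumber { get; set; } // a common field
}

[ProtoContract]
public class SubMessage1 : BaseMessage
{
    [ProtoMember(1)]
    public string Payload { get; set; } // a type-specific field
}

[ProtoContract]
public class SubMessage2 : BaseMessage { }

// Deserializing as the base type yields the correct concrete sub-type:
// BaseMessage msg = Serializer.Deserialize<BaseMessage>(stream);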
Re extension data: protobuf-net does support that; however, as mentioned in the blog, it is not included in the current v2 beta. It will be there soon, but I had to draw the line somewhere. It is included in the v1 (r282) download, though.
Note that protobuf-net is just one of several C#/.NET implementations. The wire format is the same, but you might also want to consider protobuf-csharp-port, the directly ported version. If I had to summarise the difference, I would say "protobuf-net is a .NET serializer that happens to be protobuf; protobuf-csharp-port is a protobuf serializer that happens to be .NET" - they both achieve the same end, but protobuf-net focuses on being idiomatic to C#/.NET, whereas the port focuses more on having the same API. Either should work here, of course.
I have a set of strings like this:
System.Int32
string
bool[]
List<MyType.MyNestedType>
Dictionary<MyType.MyEnum, List<object>>
I would like to test if those strings are actually source code representations of valid types.
I'm in an environment that doesn't support Roslyn, and incorporating any sort of parser would be difficult. This is why I've tried using System.Type.GetType(string) to figure this out.
However, I'm going down a dirty road, because there are so many edge cases where I need to modify the input string to match the assembly-qualified name format. E.g. the nested type "MyType.MyNestedType" needs to become "MyType+MyNestedType", and generics also have to be figured out the hard way.
Is there any helper method which does this kind of checking in .NET 2.0? I'm working in the Unity game engine, and we don't have any means to switch our system to a more sophisticated environment with available parsers.
Clarification
My company has developed a code generation system in Unity, which is not easily changed at this point. The one thing I need to add to it is the ability to get a list of fields defined in a class (via reflection) and then separate them based on whether they are part of the default runtime assembly or enclosed within #if UNITY_EDITOR preprocessor directives. When those are set, I basically want to handle those fields differently, but reflection alone can't tell me. Therefore I have decided to open my script files, look through the text for such define regions, check whether a field is declared within them, and if so, put it in a separate FieldInfo[] array.
The one thing that is fixed and not changeable: all scripts will be inspected via reflection, and a collection of FieldInfo is used to generate new source code elsewhere. I just need to separate that collection into individual ones for the runtime vs. editor assembly.
Custom types and nested generics are probably the hard part.
Can't you just have an "equivalency map to fully qualified names", or a few translation rules for all custom types?
I guess you know in advance what you will encounter.
Or maybe run it the opposite way: at startup, scan your assembly(ies) and, for each class contained inside, generate the equivalent name "as it's supposed to appear" in your input file from the fully qualified name in GetType() format.
For custom types from other assemblies, note that you have to do things such as calling Assembly.LoadFile(), or pass the assembly name in the second parameter to GetType(), before you are able to load them.
See here for an example: Resolve Type from Class Name in a Different Assembly
Maybe this answer could also help: How to parse C# generic type names?
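To make the second link concrete, here is a rough, best-effort sketch of resolving a source-style name by parsing the generic arguments and scanning the loaded assemblies. All helper names are hypothetical, arrays and most C# aliases are left out, and it assumes using System, System.Collections.Generic, and System.Reflection:

static Type ResolveSourceStyleName(string name)
{
    name = name.Trim();
    int lt = name.IndexOf('<');
    if (lt < 0)
        return FindSimpleType(name);
    // split the generic arguments at the top nesting level
    string outer = name.Substring(0, lt);
    string inner = name.Substring(lt + 1, name.Length - lt - 2);
    List<string> args = SplitTopLevel(inner);
    Type definition = FindSimpleType(outer + "`" + args.Count);
    if (definition == null) return null;
    Type[] resolved = new Type[args.Count];
    for (int i = 0; i < args.Count; i++)
    {
        resolved[i] = ResolveSourceStyleName(args[i]);
        if (resolved[i] == null) return null;
    }
    return definition.MakeGenericType(resolved);
}

static List<string> SplitTopLevel(string s)
{
    List<string> parts = new List<string>();
    int depth = 0, start = 0;
    for (int i = 0; i < s.Length; i++)
    {
        if (s[i] == '<') depth++;
        else if (s[i] == '>') depth--;
        else if (s[i] == ',' && depth == 0)
        {
            parts.Add(s.Substring(start, i - start));
            start = i + 1;
        }
    }
    parts.Add(s.Substring(start));
    return parts;
}

static Type FindSimpleType(string name)
{
    // handle a few C# aliases; a real implementation needs them all
    if (name == "string") return typeof(string);
    if (name == "object") return typeof(object);
    if (name == "bool") return typeof(bool);
    if (name == "int") return typeof(int);
    foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies())
    {
        foreach (Type t in asm.GetTypes())
        {
            // nested types use '+' in reflection names; compare with '.' instead
            // (naive suffix match; real code should also check the '.' boundary)
            if (t.FullName != null && t.FullName.Replace('+', '.').EndsWith(name))
                return t;
        }
    }
    return null;
}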
Could you please detail what the final purpose of the project is? The problem is a bit surprising, especially for a Unity project. Is it because you used some kind of unusual serialization to persist the state of some of your objects?
This answer is more a few recommendations and questions to help you clarify the needs than a definitive answer, but it can't fit in a single comment, and I think it provides useful information.
We would like to handle an entire BizTalk message (preferably in the form of an XLANGMessage) through a custom method (.net) that is exposed as a BRE Fact per this article.
Is it possible to define the data being passed to a particular BRE fact as being the entire message? If so, what steps are required to do so (other than defining the method's input parameter as an XLANGMessage)?
EDIT - We simply want to get the entire BizTalk message passed into some custom code so that we can process it - specifically inside the BRE through a vocabulary. The article linked above explains how to set up our custom code to be executed, but I am unable to find out how to set the data being passed to the aforementioned code to be the entire message being processed.
Technically, yes: XLANGMessage is a .NET class, and you can pass instances of it as Facts to the Policy.
However, I don't think that would be a good idea. The BRE has its own Xml type, TypedXmlDocument, which is used to pass Xml documents as Facts. This is what happens behind the scenes with the Call Rules shape.
XLANGMessage is really just a container; the Part data can take many forms. If it's not an XmlDocument, you should probably pass the Part data as its native underlying Type.
Finally, that MSDN article title is a bit misleading. The BRE doesn't really use Assemblies specifically in any way; what you see there is just a Class Browser. It's the Classes in the Assemblies that the BRE can use.
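If you do pass the XLANGMessage itself, a fact method might look something like this hedged sketch (the class name, method, and XPath are made up; XLANGMessage comes from Microsoft.XLANGs.BaseTypes):

using System.Xml;
using Microsoft.XLANGs.BaseTypes;

public class MessageFacts
{
    public static string GetOrderId(XLANGMessage message)
    {
        try
        {
            // retrieve the body part (index 0) as an XmlDocument
            XmlDocument doc = (XmlDocument)message[0].RetrieveAs(typeof(XmlDocument));
            XmlNode node = doc.SelectSingleNode(
                "/*[local-name()='Order']/*[local-name()='OrderId']");
            return node == null ? null : node.InnerText;
        }
        finally
        {
            // release the message reference when it is handed out of an orchestration
            message.Dispose();
        }
    }
}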
The BizTalk Business Rules Engine Pipeline Framework allows you to call a Business Rules Policy from a pipeline component. As boatseller answered, BizTalk usually wants messages to be parsed into an XML format for processing, and the BRE also deals with XML facts.
(Full disclosure: The BRE Pipeline Framework is written by a colleague of mine at Datacom Systems New Zealand)
I have some integrations (like Salesforce) that I would like to hide behind a product-agnostic wrapper (like a CrmService class instead of SalesforceService class).
It seems simple enough that I can just create a CrmService class and use the SalesforceService class as an implementation detail in the CrmService, however, there is one problem. The SalesforceService uses some exceptions and enums. It would be weird if my CrmService threw SalesforceExceptions or you were required to use Salesforce enums.
Any ideas how I can accomplish what I want cleanly?
EDIT: Currently for exceptions, I am catching the Salesforce one and throwing my own custom one. I'm not sure what I should do for the enums though. I guess I could map the Salesforce enums to my own provider-agnostic ones, but I'm looking for a general solution that might be cleaner than having to do this mapping. If that is my only option (to map them), then that is okay, just trying to get ideas.
The short answer is that you are on the right track, have a read through the Law of Demeter.
The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding".
The advantage of following the Law of Demeter is that the resulting software tends to be more maintainable and adaptable. Since objects are less dependent on the internal structure of other objects, object containers can be changed without reworking their callers. Although it may also result in having to write many wrapper methods to propagate calls to components; in some cases, this can add noticeable time and space overhead.
So you see, you are following quite a good practice, which I generally follow myself, but it does take some effort.
And yes, you will have to catch and throw your own exceptions and map enums, requests, and responses. It's a lot of upfront effort, but if you ever have to change out Salesforce in a few years, you will be regarded as a hero.
As with all things in software development, you need to weigh up the effort versus the benefit you will gain. If you think you are likely never to change out Salesforce, is it really needed? That's for you to decide.
To make use of good OOP practices, I would create a small interface ICrm with the basic members that all your CRMs have in common. This interface would include the typical methods like MakePayment(), GetPayments(), CheckOrder(), etc. Also create the enums that you need, like OrderStatus or ErrorType, for example.
Then create your specific classes implementing the interface, e.g. class CrmSalesForce : ICrm. Here you can convert the specific details of this particular CRM (SalesForce in this case) to your common ICrm. Enums can be converted to strings and back again if you have to (http://msdn.microsoft.com/en-us/library/kxydatf9(v=vs.110).aspx).
Then, as a last step, create your CrmService class and use Dependency Injection in it (http://msdn.microsoft.com/en-us/library/ff921152.aspx); that is, pass an ICrm as a parameter to its constructor (or to its methods if you prefer). That way you keep your CrmService class quite cohesive and independent, so you can create and use different CRMs without the need to change most of your code.
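Putting those pieces together, a minimal sketch might look like this (SalesforceOrderStatus, GetOrderStatus, and the other member names are stand-ins for whatever the real SalesforceService exposes):

using System;

public enum OrderStatus { Pending, Completed, Failed }

public class CrmException : Exception
{
    public CrmException(string message, Exception inner) : base(message, inner) { }
}

public interface ICrm
{
    OrderStatus CheckOrder(string orderId);
}

public class CrmSalesForce : ICrm
{
    private readonly SalesforceService _service; // the vendor-specific client

    public CrmSalesForce(SalesforceService service) { _service = service; }

    public OrderStatus CheckOrder(string orderId)
    {
        try
        {
            // map the provider-specific enum onto the provider-agnostic one
            SalesforceOrderStatus status = _service.GetOrderStatus(orderId);
            switch (status)
            {
                case SalesforceOrderStatus.Open: return OrderStatus.Pending;
                case SalesforceOrderStatus.Closed: return OrderStatus.Completed;
                default: return OrderStatus.Failed;
            }
        }
        catch (SalesforceException ex)
        {
            // translate the vendor exception at the boundary
            throw new CrmException("CRM operation failed", ex);
        }
    }
}

public class CrmService
{
    private readonly ICrm _crm;
    public CrmService(ICrm crm) { _crm = crm; } // injected dependency
    public OrderStatus CheckOrder(string orderId) { return _crm.CheckOrder(orderId); }
}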
I am preparing to write a platform independent socket protocol. After some initial research protobuf seems the way to go. I am new to protobuf and I can't seem to figure out one specific issue.
My requirements are:
Completely platform independent client;
C# server;
Async message communication;
Work from a .proto file (so no inference from existing classes);
Must be able to send messages without the server/client knowing up front which message type to expect.
I have already found the (De)SerializeWithLengthPrefix methods, and they are a start. What I can't figure out is how to receive a message if I do not know the type up front.
I have already seen that protobuf-net supports inheritance and can detect the type of the message, so for protobuf-net, inheriting all messages from a common base type would work. However, this is not platform independent and I am looking for a platform independent way of doing this.
So my question is: how do I send my messages so that any client can deserialize them without knowing the type up front?
If the requirement is to work with multiple messages of different types:
Just associate a unique number with each different type of message and use it as a prefix to the message; you can do this trivially with the optional integer parameter to the overloaded SerializeWithLengthPrefix method. The client would have to pre-process this prefix (or it can be handled within protobuf-net by Serializer.NonGeneric, which has a deserialization method that provides a callback for obtaining the type from this prefix number).
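For example, a hedged sketch of that round trip (the message types and tag numbers are illustrative):

using System;
using System.IO;
using ProtoBuf;

[ProtoContract]
public class SomeMessage { [ProtoMember(1)] public int Value; }

[ProtoContract]
public class SomeOtherMessage { [ProtoMember(1)] public string Text; }

class Demo
{
    static void Main()
    {
        MemoryStream stream = new MemoryStream();
        // writing: prefix the message with a tag (1) that identifies its type
        Serializer.SerializeWithLengthPrefix(stream, new SomeMessage(), PrefixStyle.Base128, 1);
        stream.Position = 0;

        // reading: resolve the tag back to a Type via the callback, then deserialize
        object message;
        Serializer.NonGeneric.TryDeserializeWithLengthPrefix(
            stream, PrefixStyle.Base128,
            delegate(int tag)
            {
                switch (tag)
                {
                    case 1: return typeof(SomeMessage);
                    case 2: return typeof(SomeOtherMessage);
                    default: return null; // unknown tag
                }
            },
            out message);
        Console.WriteLine(message.GetType().Name); // SomeMessage
    }
}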
If the requirement is to work with completely unknown data:
Given your requirements, I suspect Jon's port would be a better fit, since your emphasis is on platform independence and .proto, not on inference (where protobuf-net excels) or other .NET-specific extensions/utilities.
Note that while a .proto can be compiled (via protoc) to a regular protobuf stream, working with entirely foreign data at the receiver is... irregular. It can probably be done by treating everything as extensions or by working with the coded streams, but...
Edit following discussion in comments:
A simple schema here could be:
message BaseMessage {
optional SomeMessage someMessage = 1;
optional SomeOtherMessage someOtherMessage = 2;
optional SomeThirdMessage someThirdMessage = 3;
}
message SomeMessage {...}
message SomeOtherMessage {...}
message SomeThirdMessage {...}
(you could optionally add a discriminator, if that helps)
This is essentially how protobuf-net treats inheritance, but it is easily represented from other clients, and it handles things like the lengths of sub-messages automatically.
I want to know how deserialization works, and whether it is really necessary to have the assembly on the system where the deserialization is happening.
If you haven't looked at MSDN yet, do so. It will tell you everything you need to know about the serialization/deserialization process...at least enough to use it. The link I gave you is specifically 'how to deserialize.'
As for the more technical aspects: the pieces of information that get serialized are exactly what is required to fill that structure/class/object.
I'm not really sure what you mean by the second part of your question about the assembly. However, if you are serializing a struct (for instance), then in order to deserialize it on another machine or in another application, you must have that exact same struct available: name, fields, data types, etc.
If you're looking for the exact details, you can boot up an instance of Reflector and point it to mscorlib and look into the various classes in the System.Runtime.Serialization namespace. Here's the high-level idea (as I understand it):
The first step is ensuring that the type system that wrote the binary stream is the same as the type system that is reading it. Since so little meta-information is attached to the output, problems can arise if we're not looking at the same type. If we have two classes named A, but the writer thinks A is class A { int m_a, m_b; } and the reader thinks A is class A { int m_b, m_a; }, we're going to have problems. The problem is much worse if the types are significantly different. Keep this in mind; it will come back later.
Okay, so we're reading and writing the same types. The next step is finding all the members of the object you want to serialize. You could do this through reflection with a call like typeof(T).GetFields(~System.Reflection.BindingFlags.Default), but that will be super-slow (rule of thumb: reflection is slow). Luckily, .NET's internal calls are much faster.
Step 1: Now we get to writing. First, the writer writes the strong-named assembly that the object we're serializing resides in. The reader can then confirm that it actually has this assembly loaded. Next, the object's namespace-qualified type is written, so the reader can read into the proper object. This basically guarantees that the reading type and the writing type are the same.
Step 2: Time to actually write the object. A look at the methods of Formatter lets us know that there is some basic functionality for writing ints, floats, and all sorts of simple types. In a pre-determined order (the order they are declared, starting from the fields of the base class), each field is written to the output. For fields that are not simple types, recurse back to step 1 with the object in that field.
To deserialize, you perform these same steps, except replace all the verbs such as 'write' with verbs like 'read.' Order is extremely important.
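As a concrete illustration of those steps, here is a minimal BinaryFormatter round trip (a sketch; the [Serializable] type must be available to both the writer and the reader, which is why the assembly matters):

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class Point
{
    public int X;
    public int Y;
}

class Demo
{
    static void Main()
    {
        BinaryFormatter formatter = new BinaryFormatter();
        using (MemoryStream ms = new MemoryStream())
        {
            Point p = new Point();
            p.X = 1;
            p.Y = 2;
            // writes the assembly name, the type name, then each field in order
            formatter.Serialize(ms, p);
            ms.Position = 0;
            // resolves the type from the recorded assembly + type name,
            // then fills the fields back in; fails if the assembly is absent
            Point copy = (Point)formatter.Deserialize(ms);
            Console.WriteLine("{0},{1}", copy.X, copy.Y);
        }
    }
}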