We would like to handle an entire BizTalk message (preferably in the form of an XLANGMessage) through a custom method (.net) that is exposed as a BRE Fact per this article.
Is it possible to define the data being passed to a particular BRE fact as being the entire message? If so, what steps are required to do so (other than defining the method's input parameter as an XLANGMessage)?
EDIT - We simply want to get the entire BizTalk message passed into some custom code so that we can process it - specifically inside the BRE through a vocabulary. The article linked above explains how to set up our custom code to be executed, but I am unable to find out how to set the data being passed to the aforementioned code to be the entire message being processed.
Technically, yes: XLANGMessage is a .NET class, so you can pass instances as Facts to the Policy.
However, I don't think that would be a good idea. The BRE has its own XML type, TypedXmlDocument, which is used to pass XML documents as Facts. This is what happens behind the scenes with the Call Rules shape.
XLANGMessage is really just a container; the Part data can take many forms. If it's not an XmlDocument, you should probably pass the Part data as its native underlying Type.
Finally, that MSDN article title is a bit misleading. The BRE doesn't really use Assemblies specifically in any way; what you see there is just a Class Browser. It's the Classes in the Assemblies that the BRE uses.
The BizTalk Business Rules Engine Pipeline Framework allows you to call a Business Rules Policy from a pipeline component. As boatseller answered, BizTalk usually wants messages to be parsed into an XML format for processing, and the BRE also deals with XML facts.
(Full disclosure: The BRE Pipeline Framework is written by a colleague of mine at Datacom Systems New Zealand)
I'd like to validate my understanding of Saxon's XSLT objects and concurrency.
Basically, I need custom resolvers to return request-specific data to transforms, and currently, I'm creating new instances of the resolvers for each transform for each request. I've had early reports of data from one request used in another, which is a significant problem.
Using Saxon 9.6.0.6 HE on .NET 4.6 (C#), Windows 7/Server 2012.
My code can execute many different shared transforms for many concurrent requests. Saxon's compiled XSLT is a must for performance. Currently the code is multithreaded (using the TPL and async where appropriate) and does not use locking (and I want to avoid locking if possible).
I've had occasional reports of data being incorrectly 'leaked' across requests in the output of transforms (i.e., likely a concurrency issue). I'm not sure whether this is linked to the behaviour of the custom XmlResolver or the custom CollectionUriResolver; I'm awaiting more information. I haven't yet been able to recreate the issue (still working on this and will post updates if I can).
Our transforms use both fn:doc and fn:collection.
The code precompiles all possible transforms on application startup. These executables are shared.
For a given transform in a transaction, my code creates an XsltTransformer object via the compiled executable's .Load() call. This appears to create a new object looking at the 9.6 HE code (which is what I'd expect).
Next, my code creates new instances of a custom XmlResolver and CollectionUriResolver (haven't yet moved to CollectionFinder but think this may operate in the same manner) and these are populated with the appropriate request-specific docs/values/etc to feed into the transform.
These two resolvers have a lifetime of just one XSLT execution - they're not reused.
We associate the resolvers with the XsltTransformer objects the only way I know how:
Saxon.Api.XsltTransformer transform = executable.Load();
transform.InitialContextNode = sourceData;
transform.Implementation.getConfiguration().setCollectionURIResolver(collectionResolver);
transform.InputXmlResolver = inputResolver;
In the Saxon code, it looks like the input resolver is instance based and therefore not shared (ultimately appears to live in the Controller class as shown below, which itself is a new instance when the XsltTransformer is created via Load()).
public XmlResolver InputXmlResolver
{
    set
    {
        controller.setURIResolver(new DotNetURIResolver(value));
    }
}
However, I'm worried that the configuration data may be shared, and in setting the collection resolver on the configuration object (the CollectionFinder appears to be the same) we may have our concurrency problem.
What is the right way to achieve the outcome I'm after, namely for our custom resolvers to respond with request-specific behaviour? Can I use one pair of instances per transform, each holding request-specific data, or do the resolvers have to be shared across requests (possibly injecting the request ID into the transform to form part of the URIs passed to the resolvers)?
Slight update
It appears you can set the CollectionURIResolver either on the controller ('Implementation') directly or on the configuration, and these are distinctly different objects in memory:
transform.Implementation.setCollectionURIResolver(collectionResolverOne);
transform.Implementation.getConfiguration().setCollectionURIResolver(collectionResolverTwo);
However, at runtime, it's the configuration's resolver that is invoked (collectionResolverTwo in the case above). I'm not sure what purpose the controller's copy serves.
Additionally, it would appear that the configuration data is indeed shared, because if I create a second transformer from the same executable and set its collection resolver at configuration level, this updates the resolver used by the first transformer.
So - I think I've found my problem - I just now need to know the right thing to do in my scenario where I need the collection resolver to resolve the collections uniquely for each request (eg, one request might have five entries in a particular collection, another might have two).
I think that in explaining your problem you have essentially solved it yourself. The Configuration object in Saxon (which underpins the Processor at the API level) is shared, and after any initialization it's strongly recommended not to change its state using methods such as setURIResolver(), since such changes will affect work-in-progress in an undefined way.
Saxon has API objects and internal objects with a one-to-one correspondence, and the internal objects aren't 100% encapsulated because some users need access to more intimate functionality. In the Java world there are also JAXP classes that are conceptually similar, but restricted to XSLT 1.0 functionality. The correspondence is:
Shared information about the Saxon environment as a whole:
API: Processor; Internal: Configuration; JAXP: TransformerFactory
Reusable XSLT compiler containing options for compiling stylesheets:
API: XsltCompiler; Internal: CompilerInfo; JAXP: no equivalent
A compiled stylesheet, which can be executed repeatedly (and concurrently in multiple threads):
API: XsltExecutable; Internal: Executable/PreparedStylesheet; JAXP: Templates
A single transformation, transforming one source document using one stylesheet:
API: XsltTransformer; Internal: Controller; JAXP: Transformer
Some configuration options in Saxon are available only at the Configuration level, for example, the set of collation URIs available is defined at this level, and cannot be varied from one transformation to another.
However, URIResolvers can generally be defined at the XsltTransformer/Controller level. They can also be set on the Processor/Configuration, but that's just a default if you want to use the same one throughout. In your situation you should be setting them at the Controller level.
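To see why mutating state at configuration level is dangerous, here is a toy C# model of the shared-vs-per-instance pattern. These are not Saxon's real classes, just the shape of the problem: one shared object consulted by every transformation, versus per-transformation state.

```csharp
using System;

// Toy stand-ins for Saxon's object relationships (hypothetical classes):
// one shared Configuration, one Controller per transformation.
class Configuration
{
    public string CollectionResolver; // shared mutable state
}

class Controller
{
    private readonly Configuration config;
    public string LocalResolver; // per-transformation state

    public Controller(Configuration config) => this.config = config;

    // Falls back to the shared configuration when no local resolver is set.
    public string EffectiveResolver => LocalResolver ?? config.CollectionResolver;
}

class SharedStateDemo
{
    static void Main()
    {
        var shared = new Configuration();
        var requestA = new Controller(shared);
        var requestB = new Controller(shared);

        // Setting at configuration level leaks into every in-flight request:
        shared.CollectionResolver = "resolver-for-A";
        Console.WriteLine(requestB.EffectiveResolver); // resolver-for-A (leaked!)

        // Setting at controller level stays request-local:
        requestA.LocalResolver = "resolver-for-A";
        requestB.LocalResolver = "resolver-for-B";
        Console.WriteLine(requestA.EffectiveResolver); // resolver-for-A
        Console.WriteLine(requestB.EffectiveResolver); // resolver-for-B
    }
}
```

The same reasoning explains the observation in the question: two transformers built from one executable share the configuration, so setting the collection resolver there updates both.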
I have a validation.xml file from Struts, and am going to implement a server-side validation in .NET based on it. The validation.xml file is accompanied with a validationMessages.properties file. Are there any .NET libraries which are capable of performing a validation based on a Struts validation file?
If this has never been done, I'll have to create such a library myself, since the validation file is too long and complex to re-implement as plain C# logic. Which raises the question: how would I even begin?
The end-goal is to be able to populate a C# class with properties for all fields, execute a validation method with that class as a parameter and have it return a list of validation error messages (or no errors in case of success).
I'd be surprised if anything like that existed; it's relatively unusual to move from Java -> .NET.
First, see if there are any custom validators. That code would need to be duplicated.
Then pick apart the different forms (or actions, depending on how they did validation). Put each of those into a C# class (but see below) rather than one giant one. I'm not sure what you mean by "A C# class with properties for all fields"; personally I'd go more granular.
Or just use an existing C# validation package and do a translator from Apache Commons Validation to the C# configuration (or code).
It should be a relatively straightforward process, since the validation config format is well known and documented, and all the code is available.
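Since validation.xml (Apache Commons Validator format) is plain XML, a first pass could parse the forms and fields with LINQ to XML and drive the checks from that. A rough sketch follows; the sample XML mirrors the Commons Validator layout, and only the 'required' rule is implemented, so treat it as a starting point rather than a complete engine:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class StrutsValidator
{
    // Parse a Commons Validator validation.xml fragment and run the
    // 'required' rule against a dictionary of submitted field values.
    public static List<string> Validate(string validationXml,
                                        string formName,
                                        IDictionary<string, string> values)
    {
        var errors = new List<string>();
        XDocument doc = XDocument.Parse(validationXml);

        var form = doc.Descendants("form")
                      .FirstOrDefault(f => (string)f.Attribute("name") == formName);
        if (form == null) return errors;

        foreach (XElement field in form.Elements("field"))
        {
            string property = (string)field.Attribute("property");
            var depends = ((string)field.Attribute("depends") ?? "")
                          .Split(',').Select(d => d.Trim());

            // Only 'required' is implemented here; other rules (email,
            // mask, intRange, ...) would be added the same way.
            if (depends.Contains("required") &&
                (!values.TryGetValue(property, out string v) ||
                 string.IsNullOrWhiteSpace(v)))
            {
                errors.Add($"{property} is required.");
            }
        }
        return errors;
    }
}

class ValidatorDemo
{
    static void Main()
    {
        const string xml = @"
<form-validation>
  <formset>
    <form name='userForm'>
      <field property='email' depends='required,email'/>
      <field property='nickname' depends=''/>
    </form>
  </formset>
</form-validation>";

        var values = new Dictionary<string, string> { ["nickname"] = "bob" };
        foreach (string e in StrutsValidator.Validate(xml, "userForm", values))
            Console.WriteLine(e); // email is required.
    }
}
```

The error messages would come from the accompanying validationMessages.properties file, which is a simple key=value format and easy to load into a dictionary as well.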
I'm new to web services and I'm developing a C# WCF service that calls an external service from another company to get some client data (for example: name, address, phone, etc.); this part is working fine so far.
The external service is based on a standard XML Schema, and other companies will soon have the same service generated from the same XML Schema, using the same method names and returning the same type of XML file.
My first question: after I complete this first implementation, is there any way to add the other external companies' services "dynamically", given their URLs/ports/etc.? Or do I have to add each of them manually as a service reference in my internal service project every time I need a new one, then compile and redeploy?
My second question relates to the data contracts/members. My understanding is that even if the services return the same XML files, their data contracts/members will be different; is that true? If so, will I have to write specific code to read the information I need from each new external company's data contract? If this is true, I have been thinking of writing generic code to read the raw XML; is this the best choice?
While C# is a compiled language, it does support plugin architectures through MEF. You could use this and add a small plugin .dll for each of your sources.
That being said it's quite possible that all you need is a configuration list containing connection details for each of your sources and connecting to them dynamically. That will only work if they're using the exact same schema, so that the objects they serve will serialize the same for all sources. You will have to instantiate the proxy dynamically through code using that configuration then, of course.
I should add something for your second question. As long as you're the one defining the contract, it doesn't matter if their actual objects are different. All you care about on your end is the xml they serve, and that you can connect using your representation. In fact, you can generate the contract as a .wsdl document. Each of the service-implementer can then generate domain objects from that. On the other hand if you're not the one "owning" the contract, some of the sources may decide to do it slightly differently, which will cause you a headache. Hopefully that's not your scenario though.
Best of luck! :)
My first question: after I complete this first implementation, is there any way to add the other external companies' services "dynamically", given their URLs/ports/etc.?
Unfortunately, yes: you will have to add each service reference, compile, and redeploy every time.
My second question is related with the data contract /members, my understanding is that even if they are returning the same XML files, their data contracts/members will be different, is that true?
If you use auto-generated proxies, every service will produce different contract types. I would think about creating your own class and converting the external classes to it using reflection and extension methods.
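The reflection idea above could look roughly like this: one extension method that copies same-named, same-typed properties from whatever proxy type a given service generated into your own canonical class. The type names here are hypothetical, and a production mapper would want caching and nested-object support:

```csharp
using System;
using System.Reflection;

// Your own canonical contract, independent of any generated proxy type.
public class ClientData
{
    public string Name { get; set; }
    public string Phone { get; set; }
}

// A stand-in for one company's auto-generated proxy class.
public class CompanyAClientDto
{
    public string Name { get; set; }
    public string Phone { get; set; }
    public int InternalId { get; set; } // extra member, simply ignored
}

public static class ContractMapper
{
    // Copy properties that match by name and type from any source object.
    public static T MapTo<T>(this object source) where T : new()
    {
        var target = new T();
        foreach (PropertyInfo tp in typeof(T).GetProperties())
        {
            PropertyInfo sp = source.GetType().GetProperty(tp.Name);
            if (sp != null && sp.PropertyType == tp.PropertyType && tp.CanWrite)
                tp.SetValue(target, sp.GetValue(source));
        }
        return target;
    }
}

class MapperDemo
{
    static void Main()
    {
        var dto = new CompanyAClientDto { Name = "Alice", Phone = "555-0100", InternalId = 7 };
        ClientData client = dto.MapTo<ClientData>();
        Console.WriteLine($"{client.Name} / {client.Phone}"); // Alice / 555-0100
    }
}
```

Because all the proxies are generated from the same XML Schema, the property names should line up, which is exactly the situation where a name-based reflection mapper is reasonable.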
My client-server communication looks like this: there are some so-called announcements, which are separate messages used to exchange information. The idea is that the announcement is the common part of every message; effectively it is the type of the message, and the type decides what the content is. In a UML class diagram, Announcement would be the class all other messages inherit from.
I want to implement that idea in communication between two applications, one written in C++ and the other in C#. I thought I could write a message that contains one field with the type of the message (an enum field). All additional information relevant to the type would be implemented as extensions.
I have found some examples of how to use extensions in C++; however, I have no clue how to do it in C#. I know there are interfaces IExtensible and IExtension (in protobuf-net), but how can I use them? Internet resources seem to be poor on the matter.
I suppose messages in C# used to be defined in a similar fashion to the way they are still defined in C++ apps (using a proto file and protoc). Can I use the same proto file to define the messages in C#? How? Will extensions be interpreted or overridden?
If I could implement extensions, I would send a message, parse it, check the type, and use the appropriate function to handle it. That sounds good to me because I wouldn't have to take care of the type of the message I was going to read - I wouldn't have to know the type before parsing.
There are a number of ways you could do this. I'm not actually sure extensions is the one I would leap for, but:
in your message type, you could have a set of fully defined fields for each sub-message, i.e.

message BaseMessage {
    // fields 1-5: common fields
    optional SubMessage1 sub1 = 20;
    optional SubMessage2 sub2 = 21;
    optional SubMessage3 sub3 = 22;
    optional SubMessage4 sub4 = 23;
}
message SubMessage1 {
    // fields 1-n: specific fields
}

where you would have exactly one of the sub-message fields populated.
alternatively, encapsulate the common parts inside the more specific message:

message CommonFields {
    // fields 1-n
}
message SubMessage1 {
    optional CommonFields common = 1;
    // fields 2-m: specific fields
}
Either approach would allow you to deserialize; the second is trickier, IMO, since it requires you to know the type ahead of time - the only convenient way to do that is to prefix each message with a different identifier. Personally, I prefer the first. Note that neither approach requires extensions, since we know everything ahead of time. As it happens, the first is also how protobuf-net implements inheritance, so you could do it with type inheritance (4 concrete sub-types of an abstract base message type) and [ProtoInclude(...)].
Re extension data: protobuf-net does support that; however, as mentioned in the blog, it is not included in the current v2 beta. It will be there soon, but I had to draw a line somewhere. It is included in the v1 (r282) download, though.
Note that protobuf-net is just one of several C#/.NET implementations. The wire format is the same, but you might also want to consider the directly ported version, protobuf-csharp-port. If I had to summarise the difference, I would say "protobuf-net is a .NET serializer that happens to be protobuf; protobuf-csharp-port is a protobuf serializer that happens to be .NET" - they both achieve the same end, but protobuf-net focuses on being idiomatic to C#/.NET, whereas the port focuses more on having the same API. Either should work here, of course.
I'm writing a kind of computing farm, with a central server handing out tasks and nodes that compute them.
I wanted to write it in such a way that the nodes don't know what exactly they are computing. They get (from the server) an object that implements the IComputable interface, which has one method, .compute(), that returns an IResult object, and they send that result back to the server.
The server is responsible for preparing these objects and serving them through a .getWork() method on a WCF service, and it collects the results with a .submitResult(IResult result) method.
The problem is that the worker nodes need to know not only the interface but the full object implementation.
I know that Java can ship method implementations (as bytecode) through RMI. Is this possible with C#?
What you will have to do is put the type which implements the method you are describing into a separate assembly. You can then send that assembly as a byte array from the server to the worker nodes, where each node will load the assembly, inspect it for types that implement your interface, and then instantiate them. This is the basic pattern for plug-ins using .NET.
Some care has to be taken, though. If you are accepting code from arbitrary sources, you will have to lock down what these loaded assemblies can do (and it is good practice to do so even if you trust the source).
A good classic example for how to do this is the Terrarium project. It is a case study that Microsoft produced that involved the viral spreading of arbitrary assemblies in a secure fashion.
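A minimal sketch of the load-and-inspect step, using the IComputable interface from the question (the SquareTask class is a made-up stand-in for a real work item). One subtlety worth knowing: Assembly.Load(byte[]) loads into a separate context, so the loaded copy of IComputable is a different Type identity from yours; matching by interface name and invoking via reflection sidesteps that, though a real system would share the interface in a common assembly instead:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

public interface IComputable
{
    object Compute();
}

public class SquareTask : IComputable
{
    public object Compute() => 6 * 7;
}

public static class PluginLoader
{
    // Load an assembly from raw bytes (e.g. received over WCF), find every
    // concrete type implementing IComputable, and run each one.
    public static object[] RunAll(byte[] assemblyBytes)
    {
        Assembly asm = Assembly.Load(assemblyBytes);
        return asm.GetTypes()
            .Where(t => !t.IsAbstract && !t.IsInterface &&
                        t.GetInterfaces().Any(i => i.FullName == typeof(IComputable).FullName))
            .Select(t =>
            {
                object task = Activator.CreateInstance(t);
                // Invoke via reflection rather than casting, because the
                // byte-loaded copy implements its own IComputable identity.
                return t.GetMethod("Compute").Invoke(task, null);
            })
            .ToArray();
    }
}

class PluginDemo
{
    static void Main()
    {
        // For the demo, this very assembly's bytes stand in for a plug-in dll.
        byte[] bytes = File.ReadAllBytes(typeof(SquareTask).Assembly.Location);
        foreach (object result in PluginLoader.RunAll(bytes))
            Console.WriteLine(result);
    }
}
```

In the computing-farm scenario, the server would serve the plug-in assembly bytes alongside (or instead of) the serialized work object, and the sandboxing concerns above apply before calling Compute on anything.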
You can do

Expression<Func<TResult>> lambda = () => MyFunction();

(with System.Linq.Expressions) and then serialize the expression tree to a string and deserialize it on the server. Note that the BCL has no built-in parser for the string form, so the deserialization step needs a third-party expression serializer.
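For completeness, here is what the expression-tree approach looks like end to end on one machine. The readable string comes from Expression.ToString(); as noted above, turning that string back into a tree is not something the BCL can do on its own:

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Capture the computation as an expression tree, not a compiled method.
        Expression<Func<int, int>> lambda = x => x * x + 1;

        // A readable serialization; NOT round-trippable with the BCL alone.
        Console.WriteLine(lambda.ToString()); // x => ((x * x) + 1)

        // The side that holds the tree can compile and execute it.
        Func<int, int> compiled = lambda.Compile();
        Console.WriteLine(compiled(5)); // 26
    }
}
```

This only captures expressions (no statements, loops, or local state), which is another reason the separate-assembly approach above is the more general answer for shipping arbitrary compute code.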