I have a send port that receives a document with a set of promoted context properties. The adapter on the send port is WCF-SQL, and it has been configured to connect to the SQL Server.
The only part missing is the Messages tab, which controls the message that gets sent to the database. Right now I simply have some hardcoded values along with the message body itself:
<bizSaveDocument xmlns="http://schemas.microsoft.com/Sql/2008/05/Procedures/dbo">
<conversationID>547e0702-c0c8-4535-9ab0-fa52b2fdbdd0</conversationID>
<dataType>OIO</dataType>
<fromID></fromID>
<toID></toID>
<msgInfoExtension><![CDATA[<infoExt><fileInfo fileName="ublinvoice.xml" encoding="utf-8" /></infoExt>]]></msgInfoExtension>
<msgBody><bts-msg-body xmlns="http://www.microsoft.com/schemas/bts2007" encoding="string"/></msgBody>
<msgBodyBin></msgBodyBin>
</bizSaveDocument>
I'm unsure how to properly insert my promoted context properties into these elements. (The XML above is the template I currently have on the Messages tab of the send port's WCF-SQL transport properties.)
I cannot use the body option, since I need to insert some promoted properties into the database. Looking at MSDN, there seems to be no explanation of how to accomplish this; see this link: https://learn.microsoft.com/en-us/biztalk/core/specifying-the-message-body-for-the-wcf-adapters
On the receive side, I created a pipeline component that promotes the required properties, and it works fine.
Is this simply not possible in standard BizTalk? If it isn't, I will need to create an additional pipeline component to handle the send side.
Ah, OK, I see what you're doing... so... don't do it this way.
The best and essentially correct way to handle this is a normal BizTalk flow with Maps and an Orchestration. Remember, there is nothing wrong with using an Orchestration; if someone is telling you not to use Orchestrations, they are, well, wrong.*
Basically, map to your SQL schema using temp values, then set them from the Context using Distinguished fields.
Don't ever bother with the Messages tab; it's basically hiding code where it should never be.
If they still make you do it some other way, you need to tell your management that it will take about twice as long to implement, because you have to create an anti-pattern that replicates built-in functionality.
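As a rough sketch of what that looks like in the orchestration (all names here are hypothetical: msgSqlRequest is the bizSaveDocument message built by the map, msgReceived is the inbound message, and the property schema is whatever your pipeline component already promotes), the Message Assignment shape that follows the Transform shape would contain something like:

// XLANG/s expression, inside the same Construct Message shape as the map
msgSqlRequest.conversationID = msgReceived(MyCompany.PropertySchema.ConversationID);
msgSqlRequest.fromID = msgReceived(MyCompany.PropertySchema.FromID);
msgSqlRequest.toID = msgReceived(MyCompany.PropertySchema.ToID);

where conversationID, fromID, and toID are marked as distinguished fields on the bizSaveDocument request schema.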
Is it recommended practice to implement the endpoint below using the PUT verb to both create and update a resource?
PUT /jobs/{jobid}
(or)
POST /jobs - to create the resource
PUT /jobs/{jobid} - only to update an existing record
Mixing create and update logic in a PUT endpoint may create issues on the consumer side, since PUT is idempotent while POST is NOT idempotent.
What are the other consequences if I mix create and update logic within the PUT endpoint?
Please point me to relevant RFCs, if any.
The HTTP PUT verb is used to update a resource, but it can also be used to create a resource if it does not already exist. However, some consider that bad practice, as it goes against the verb's usual meaning.
POST is not idempotent, while PUT is idempotent. This means that multiple identical POST requests may create multiple resources, while multiple identical PUT requests should update the same resource each time.
If you want to support both creating and updating a resource using the same endpoint, you can use the POST verb for both operations and include an additional parameter or field in the request to indicate whether you are creating or updating the resource.
You can refer to the HTTP/1.1 specification (RFC 7231): https://tools.ietf.org/html/rfc7231#section-4.3
Mixing create and update logic in a PUT endpoint may create issues on the consumer side, since PUT is idempotent while POST is NOT idempotent.
It shouldn't introduce any client issues.
An important constraint in REST is the uniform interface, which means (among other things) that everybody understands message semantics the same way. In the context of HTTP, that means that everybody agrees that HTTP PUT means... whatever the current standard says it means.
The current registered reference for HTTP PUT is RFC 9110:
The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message content.
A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being sent in a 200 (OK) response.
In other words, PUT is a lot like "save file"; it's the HTTP method we would use if we were using HTTP to publish a new page to our website.
The uniform interface constraint tells us that our HTTP APIs should understand messages exactly the same way that a general purpose web server would understand them.
The power that gives us is that it allows us to use general purpose components (browsers, caches, proxies) without needing to know anything about the semantics of the resource or its representation.
(Note: the important thing to recognize is that agreeing on what the messages mean doesn't mean that the server needs to do a specific thing. See Fielding 2002 on the semantic constraints of HTTP GET; the principle generalizes to all standardized HTTP methods.)
Now, you can use HTTP POST if you prefer (see Fielding, 2009). The problem is that POST semantics allow a lot more freedom, which restricts a general purpose component from doing intelligent things because it doesn't know enough about what is going on.
For example, on an unreliable network an HTTP response may be delayed or lost. Because the semantics of PUT describe an idempotent action, general purpose clients can know that it is safe to try sending the request again. POST, on the other hand, doesn't imply that constraint, and therefore general purpose components shouldn't automatically retry those requests.
But it's a trade-off - POST limits what a general purpose component can do in response to a contingency, but maybe it is worth it if that means your API is more familiar to the human developers who are going to use it, or if it makes life easier for the operators keeping your API running, or whatever.
If PUT can create or update a record, then how can it be idempotent?
Because "idempotent", in HTTP, means:
the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.
It's a lot like how we use maps/dictionaries/associative-arrays to store information
dict["readme.txt"] = "Hello World"
and
dict["readme.txt"] = "Hello World"
dict["readme.txt"] = "Hello World"
dict["readme.txt"] = "Hello World"
Call it once, call it twice, call it thrice, the end result is the same: we have this specific value stored under this specific key.
That's really what PUT means; the target URI is the key, the request body is the value. "Please make your document look like my document".
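To make that concrete, here is a made-up exchange (the path and body are just examples); sending this request once, or several times, leaves /jobs/1234 in the same state:

PUT /jobs/1234 HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"title": "Technical Editor", "status": "open"}

Per RFC 9110, the server answers 201 (Created) if /jobs/1234 did not exist before, or 200 (OK) / 204 (No Content) if an existing representation was replaced; either way, repeating the same request doesn't change the end state.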
I am attempting to bind to a non-transactional MSMQ queue using MsmqIntegrationBinding. What I want to be able to do is peek at the message and then, if conditions are right, go ahead and process it. But if conditions are not right, I want to leave the message on the queue.
I came across ReceiveContextEnabled=True, but have seen very little documentation and few tutorials on how to actually use it. I am hosting the WCF service library through a Windows service. However, when I open my host, I get an error like:
The contract IWMInTranslator_Service_MSMQ has at least one operation annotated with ReceiveContextEnabledAttribute, but the binding used for the contract endpoint at address msmq.formatname:DIRECT=OS:CCNU404CCH5%5Cprivate$%5Cwmintranslateque does not support required binding property 'IReceiveContextSettings'. Please ensure that the binding used for the contract supports the ReceiveContext capability.
If I change it to a transactional queue it seems to work, but I don't want to do that. Can anybody help? I create my endpoints and bindings via code (not through app.config), so if there are some properties somewhere I can change, that would be great!
Thanks,
:) David
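For reference, the hosting code looks roughly like this (a simplified sketch run from the Windows service's startup; the service class name is made up, and ExactlyOnce = false is what targets the non-transactional queue):

using System.ServiceModel;
using System.ServiceModel.MsmqIntegration;

// Hypothetical host setup for the non-transactional queue
var binding = new MsmqIntegrationBinding(MsmqIntegrationSecurityMode.None);
binding.ExactlyOnce = false; // non-transactional queue

var host = new ServiceHost(typeof(WMInTranslatorService)); // implementation name is illustrative
host.AddServiceEndpoint(
    typeof(IWMInTranslator_Service_MSMQ),
    binding,
    "msmq.formatname:DIRECT=OS:CCNU404CCH5\\private$\\wmintranslateque");
host.Open();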
Many classes in the .NET framework (especially in the socket/network classes, which is what I'm looking at) use System.Net.GlobalLog (an internal class) to log trace messages somewhere. You can view example uses of things like GlobalLog.Assert and GlobalLog.Print in the SslState class:
SslStream source code
This is different from the System.Net.Logging (also internal) class, uses of which can also be found throughout the socket/network classes.
For System.Net.Logging, I know I can use a <system.diagnostics> configuration block in App.Config and that will result in System.Net.Logging messages getting logged if configured properly. However, this does not appear to influence System.Net.GlobalLog.
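(For reference, this is roughly the kind of configuration I mean; the listener name and log file are just examples:)

<system.diagnostics>
  <sources>
    <source name="System.Net" tracemode="includehex">
      <listeners>
        <add name="NetTraceFile" />
      </listeners>
    </source>
  </sources>
  <switches>
    <add name="System.Net" value="Verbose" />
  </switches>
  <sharedListeners>
    <add name="NetTraceFile"
         type="System.Diagnostics.TextWriterTraceListener"
         initializeData="network.log" />
  </sharedListeners>
  <trace autoflush="true" />
</system.diagnostics>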
After searching around for about an hour, I cannot seem to find any information about locating the output of System.Net.GlobalLog. Does anyone know how to locate/view/control the output of this?
As you stated, GlobalLog is a class internal to the System.Net assembly. Without being able to modify the System.Net assembly, you won't get access to that class.
That said, you might want to review the following: http://www.123aspx.com/rotor/RotorSrc.aspx?rot=42941
It looks like you have to have the compiler flags TRAVE and DEBUG set in order to get it to work, but I'm not seeing where it actually does anything with the logged information. The comments suggest that it is supposed to look for an environment variable setting and dump the log to a text file somewhere on the system; however, the code on that page seems either incomplete or simply unfinished.
My guess is that you need to find some other way of getting access to the logging info you want.
I don't have enough rep to put this as a comment on the existing answer, but I also wanted to figure out what was happening in the TcpClient and Socket classes and found they couldn't be stepped into. The closest I got was monitoring the Windows API calls being made, using a freeware tool called API Monitor, found here: http://www.rohitab.com/apimonitor.
I'm wondering if I can define the location of the error queue for my application using the fluent (Configure.With()) syntax?
Note: this changed in NServiceBus 3, where it is configured via MessageForwardingInCaseOfFaultConfig.
There is no easy way, since we want to push users to put that setting in the config file so that ops can change it without a recompile. That said, you can override where NSB reads the setting and supply it in code instead. Do this by implementing:
IProvideConfiguration
Here is an example of how to do it:
https://github.com/NServiceBus/NServiceBus/blob/master/Samples/PubSub/Subscriber1/ConfigOverride.cs
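In case the link moves, the override looks roughly like this (NServiceBus 3-era API; the queue name is an example, and exact namespaces may differ by version):

using NServiceBus.Config;
using NServiceBus.Config.ConfigurationSource;

// Picked up automatically by NServiceBus assembly scanning; supplies the
// MessageForwardingInCaseOfFaultConfig section from code instead of app.config.
public class ErrorQueueOverride : IProvideConfiguration<MessageForwardingInCaseOfFaultConfig>
{
    public MessageForwardingInCaseOfFaultConfig GetConfiguration()
    {
        return new MessageForwardingInCaseOfFaultConfig
        {
            ErrorQueue = "error@MyServer" // hypothetical error queue
        };
    }
}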
I'm trying to send a message with MassTransit over MSMQ. The message contains two properties whose values are entities obtained from an NHibernate query and contain Castle proxies (for lazy loading).
If I send the message (using bus.Endpoint.Send(msg)) with the proxies as part of the message, I get a StackOverflowException. If I leave these two properties null, the message goes through the queue without issue.
Is this just the way it is, or am I doing something wrong with the MSMQ/MassTransit setup?
If not, would I need to use something like AutoMapper to get rid of these proxies?
This is likely an exception caused by the dynamic proxies and the serializer being used. I assume it's the default XML serializer? I would post an issue on the GitHub page for MT so we can look at this: https://github.com/MassTransit/MassTransit
These messages should be considered contracts for decoupling between processes. By using NHibernate entities, these services become coupled by more than just the messages, as a DB change could affect the other consumers. Ideally you would always map this to another object before passing it along.
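For illustration, that mapping could look something like this (type and property names are made up; the point is that the published message carries plain values, not NHibernate entities):

// Hypothetical message contract: plain, serializable data
public class OrderSubmitted
{
    public Guid OrderId { get; set; }
    public string CustomerName { get; set; }
}

// Copy only the fields you need off the (possibly proxied) entities before publishing
var message = new OrderSubmitted
{
    OrderId = order.Id,                 // 'order' is the NHibernate entity
    CustomerName = order.Customer.Name  // reading the value resolves the lazy proxy
};
bus.Publish(message);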
Is there a reason why you aren't just using bus.Publish(msg) instead of sending directly to the bus's endpoint? You could join the MT mailing list and discuss this in more detail: http://groups.google.com/group/masstransit-discuss
I hope this helps!