Intercept and modify ACK response message in BizTalk 2013 R2 - C#

I have written a custom pipeline assembler component to modify the HL7 ACK response message.
In the implemented Assemble(pContext) method of the IAssemblerComponent interface, I invoke Assemble(pContext) on Microsoft.Solutions.BTAHL7.Pipelines.HL72fAsm, which gives me an IBaseMessage containing the HL7 ACK. I then manipulate it to fix one of the fields and return the modified IBaseMessage.
All of this works just fine; I verified it with event logging.
But the sender application still doesn't receive the modified message; it receives the auto-generated one.
Is there something I'm missing? Why does the custom assembler result not come out of the send pipeline of the two-way receive port?
Note: the BTAHL7 Configuration Explorer is configured for original mode, and the send pipeline on the request-response receive port is set to my custom pipeline.

My actual suggestion comes after the more important points.
The first thing you, your employer, or your customer should say is NO. That is invalid HL7 and you cannot support it.
But if they are unable or unwilling to comply, the next thing you need to do is inform your management that their non-compliance will cost you a lot of extra time and money to accommodate. Fully supporting this change will likely cost more than implementing the business messages; I am totally serious. This is not a problem with BizTalk Server, your app, or you.
Depending on the relationship, your management can legitimately ask them how they are going to pay for this customization. It's going to cost your side far more to break HL7 to comply with them than it will cost them to fix it.
Next, and perhaps most important: due to the nature of its message content, HL7 has very strict completeness requirements, which they are fundamentally breaking. The trading partner needs to fully document this requirement and take ownership of it, because there is a huge consequence: they are breaking tracing/tracking on your end.
This means that it will be substantially more difficult for you, not them, to investigate and resolve messaging issues. This might raise legal or compliance issues your side needs to be aware of.
So, provided your technical, medical, and legal teams are all satisfied, the first thing I would try is a pipeline component that simply swaps the two values, MSH-10 and MSA-2. That way, they will receive both values.
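To make the idea concrete, here is a hedged sketch of that swap on the flat HL7 text; the pipeline plumbing (stream handling, IBaseMessage wrapping) is omitted, and only the field swap is shown:

```csharp
using System;

public static class AckFieldSwapper
{
    // Swaps MSH-10 (Message Control ID) and MSA-2 in a flat HL7 v2 ACK.
    // Assumes the default field separator '|' and segment separator '\r'.
    public static string SwapMsh10AndMsa2(string hl7)
    {
        string[] segments = hl7.Split(new[] { '\r' }, StringSplitOptions.RemoveEmptyEntries);
        string[] msh = null, msa = null;
        int mshIndex = -1, msaIndex = -1;

        for (int i = 0; i < segments.Length; i++)
        {
            if (segments[i].StartsWith("MSH")) { msh = segments[i].Split('|'); mshIndex = i; }
            else if (segments[i].StartsWith("MSA")) { msa = segments[i].Split('|'); msaIndex = i; }
        }

        if (msh == null || msa == null || msh.Length <= 9 || msa.Length <= 2)
            return hl7;  // not an ACK shape we recognize; leave it untouched

        // In a pipe-split MSH the field separator itself counts as MSH-1,
        // so MSH-10 sits at index 9; in MSA, MSA-2 sits at index 2.
        string tmp = msh[9];
        msh[9] = msa[2];
        msa[2] = tmp;

        segments[mshIndex] = string.Join("|", msh);
        segments[msaIndex] = string.Join("|", msa);
        return string.Join("\r", segments) + "\r";
    }
}
```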
Finally, here's a novel solution. Since this is their problem, and a problem for every one of their trading partners, what if you offer to help them fix it? All they need to do is what I suggested: swap MSH-10 and MSA-2 on the received message.

How to obey user-set conditions?

I am developing a custom Steam bot from scratch that will react to the numerous callbacks emitted by Steam, like OnConnected, OnTradeOfferReceived, etc. The callbacks contain parameters like IDs or data.
I want to give the user the freedom to define how the system should react when a specified callback is received.
This could easily be solved by forcing the user to manually program the "reacting" parts, but I really want to avoid that, because a big part of the potential user base are not programmers in the slightest.
The already existing SteamBot on GitHub does this, leading to questions like "how to build SteamBot.sln".
I thought of a GUI for specifying conditions and executing actions when the conditions are true, but I can't see how to evaluate them in code without hard-coding each and every option.
By actions, I mean replying to a trade offer, sending a chat message to someone, adding an item to a live trade, etc.
Maybe the GUI should generate the actual code (based on the user's input) and recompile the bot? Any help or suggestions would be appreciated.
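One possible shape for this, as a rough sketch (all names are made up; nothing here comes from an actual Steam library): rules as plain objects the GUI could build or load from a saved file, with one generic dispatcher evaluating them against every callback, so no per-option parsing code is needed.

```csharp
using System;
using System.Collections.Generic;

// One user-defined rule: which callback it applies to, when it fires,
// and what it does. A GUI would construct these instead of generating code.
public class Rule
{
    public string Callback { get; set; }  // e.g. "OnTradeOfferReceived"
    public Func<IDictionary<string, object>, bool> Condition { get; set; }
    public Action<IDictionary<string, object>> Action { get; set; }
}

public class RuleEngine
{
    private readonly List<Rule> rules = new List<Rule>();

    public void Add(Rule rule) { rules.Add(rule); }

    // Every callback handler funnels its data into this single dispatch
    // point; matching rules fire their actions.
    public void Dispatch(string callback, IDictionary<string, object> data)
    {
        foreach (Rule rule in rules)
        {
            if (rule.Callback == callback && rule.Condition(data))
                rule.Action(data);
        }
    }
}
```

A GUI that only ever produces Rule objects (or a serializable representation of them) would avoid the generate-and-recompile approach entirely.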

Unified communication

I am wondering how I should set up unified interprocess communication in a better way than I do now. The client process sends a lot of messages of different sorts to the server process: messages like "I have done some work [what], I started at [time], ended at [time]", or state, progress, and even command messages.
Example message: From:Process1;StartedAt|12:12:12;EndedAt|12:45:56;DoneUnit:51
The server parser splits the string by semicolons. From the first part it reads who sent the message, from the second and third parts it reads the times, and from the last part how much work was done.
When I add another piece of information at the end of the message,
e.g. From:Process1;StartedAt|12:12:12;EndedAt|12:45:56;DoneUnit:51;Source:tableT
I have to rewrite the server parser as well.
The server tries to parse each received message using my own parse function. Every message has its own format, so the parser knows what each message should look like. But if I change the format on the client, I have to change it on the server as well. That does not seem to be a very efficient approach.
For that reason I ask you a question:
how could this communication be improved, or is there a different approach that keeps the format for client and server in one place?
I use C# .NET 3.5 (it must be this version).
Thank you for any reply.
The obvious solution to your problem is to not write the parsing code yourself.
If you create a class that can be serialized, you can send the serialized version of the class over the wire and deserialize it at the other end. That means the message class can be shared between both applications, and the parsing code is trivial. Depending on your requirements, you can use various serializers: XML or JSON would be verbose but human-readable, while the binary serializer would be more efficient in terms of bandwidth (but harder to debug or monitor over the wire).
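For instance, a minimal sketch using XmlSerializer, which is available in .NET 3.5 (the WorkReport class and its fields are illustrative, modeled on the example message above):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Shared between client and server in one assembly, so the "format"
// lives in exactly one place. Adding a field (e.g. Source) requires
// no parser changes on either side.
public class WorkReport
{
    public string From { get; set; }
    public DateTime StartedAt { get; set; }
    public DateTime EndedAt { get; set; }
    public int DoneUnits { get; set; }
    public string Source { get; set; }
}

public static class MessageSerializer
{
    private static readonly XmlSerializer Serializer = new XmlSerializer(typeof(WorkReport));

    public static string Serialize(WorkReport report)
    {
        using (var writer = new StringWriter())
        {
            Serializer.Serialize(writer, report);
            return writer.ToString();
        }
    }

    public static WorkReport Deserialize(string xml)
    {
        using (var reader = new StringReader(xml))
        {
            return (WorkReport)Serializer.Deserialize(reader);
        }
    }
}
```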

Validating your site

My question is: what else needs to be validated, apart from what I have below?
It is important that any input to a site is properly validated:
Textboxes, etc – use .NET validators (or custom code if the validators aren’t appropriate)
Querystring or Form values – use manual validation (casting to specific types, boundary checking, etc.; see the sketch after the list below)
This ties into the problems that XSS can exploit.
Basically you have to validate any input that someone could potentially tamper with:
Form postbacks (mainly .NET controls – these can be validated with .NET validation controls; having Request Validation turned on for all pages also reduces the risk)
QueryString Values
Cookie values
HTTP Headers
Viewstate (automatically done for you as long as you have ViewState MAC enabled)
JavaScript (all JS can be viewed and changed, so you need to ensure no crucial functionality is handled by JavaScript alone, i.e. always enable server-side validation)
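As referenced above, a sketch of the manual querystring validation: parse to a specific type and boundary-check before use (the parameter name and limits are illustrative):

```csharp
// Reads a "page" value from the querystring, rejecting anything that is
// not a well-formed integer and clamping it to a legal range.
public static int GetPageNumber(System.Web.HttpRequest request)
{
    int page;
    if (!int.TryParse(request.QueryString["page"], out page))
        return 1;  // fall back to a safe default on malformed input

    if (page < 1) return 1;        // boundary checks: clamp to the range
    if (page > 1000) return 1000;  // the application considers legal
    return page;
}
```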
There is a lot that can go wrong with a web application. Your list is pretty comprehensive, although there is duplication: the HTTP spec only defines GET, POST, cookies and headers. There are many different types of POST, but it's all in the same part of the request.
For your list I would also add everything having to do with file upload, which is a type of POST: for instance, the file name, the MIME type, and the contents of the file. I would fire up a network monitoring application like Wireshark; everything in the request should be considered potentially harmful.
There will never be a one-size-fits-all validation function. If you are merging SQL injection and XSS sanitization functions, then you may be in trouble. I recommend testing your site using automation. A free service like Sitewatch or an open-source tool like skipfish will detect methods of attack that you have missed.
Also, on a side note: passing the view state around with a MAC and/or encryption is a gross misuse of cryptography. Cryptography is a tool used when there is no other solution. By using a MAC or encryption you are opening the door for an attacker to brute-force this value or to use something like a padding oracle attack against you. View state should be tracked by the server, period, end of story.
I would suggest a different way of looking at the problem that is orthogonal to what you have here (and hence not incompatible; there's no reason why you can't examine it both ways, in case you catch with one what you miss with the other).
The two things that are important in any validation are:
Things you pay attention to.
Things you pass to another layer untouched.
Now, most of the things you've mentioned so far fit into the first category. Cookies that you ignore fit into the second, as would query and post information if you passed it to another handler with Server.Execute or similar.
The second category is the most debatable.
On the one hand, if a given handler (.aspx page, IHttpHandler, etc.) ignores a cookie that may be used by another handler at some point in the future, it's mostly up to that other handler to validate it.
On the other hand, it's always good to have an approach that assumes other layers have security holes and you shouldn't trust them to be correct, even if you wrote them yourself (especially if you wrote them yourself!)
A middle-ground position is that if there are, say, 5 different states some persistent data could validly be in, but only 3 make sense when a particular piece of code is hit, that code might verify the data is in one of those 3 states, even if the others don't pose a risk to it.
That done, we'll concentrate on the first category.
Querystrings, form data, post-backs, headers and cookies all fall into the same category of stuff that came from the user (whether they know it or not). Indeed, they are sometimes different ways of looking at the same thing.
Of all this, there is a subset that we will actually act upon in some way.
Of that subset, there is a range of legal values for each item.
And of those, there is a range of legal combinations of values for the items as a whole.
Validation therefore becomes a matter of:
Identify what input we will act upon.
Make sure that each component of that input is valid in its own right.
Make sure that the combinations are valid (e.g. it may be valid to not send a credit card number, but invalid to not send one while setting the payment type to "credit card"; see the sketch below).
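A hedged sketch of that last point: each field may be individually valid while the combination is not (the field names and rules are illustrative):

```csharp
public static bool IsValidPayment(string paymentType, string cardNumber)
{
    // Each component valid in its own right?
    bool typeOk = paymentType == "cash" || paymentType == "credit card";
    bool numberOk = string.IsNullOrEmpty(cardNumber)
                 || System.Text.RegularExpressions.Regex.IsMatch(cardNumber, @"^\d{13,19}$");
    if (!typeOk || !numberOk)
        return false;

    // Combination rule: "credit card" requires a number, "cash" must not have one.
    if (paymentType == "credit card")
        return !string.IsNullOrEmpty(cardNumber);
    return string.IsNullOrEmpty(cardNumber);
}
```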
Now, when we come to this, it's generally best not to try to catch specific attacks. For example, it's not a good idea to block ' in values that will be passed to SQL. Rather, we have three possibilities:
It's invalid to have ' in the value because it doesn't belong there (e.g. a value that can only be "true" or "false", or one from a set list of values none of which contain '). Here we catch the fact that it isn't in the set of legal values, and ignore the precise nature of the attack (thus also being protected from other attacks we don't even know about!).
It's valid as human input, but not as what we will use. An example here is a large number (in some cultures ' is used to separate thousands). Here we canonicalise both "123,456,789" and "123'456'789" to 123456789 and don't care what the value looked like before, as long as we can meaningfully do so (i.e. the input wasn't "fish" or a number outside the range of legal values for the case in hand).
It's valid input. If your application blocks apostrophes in name fields in an attempt to block SQL injection, then it's buggy, because there are real names with apostrophes out there. In this case we consider "d'Eath" and "O'Grady" to be valid input and deal with the fact that ' is significant in SQL by escaping properly (ideally by using a data-access API that does this for us; see the sketch below).
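To illustrate that third case, a minimal sketch using parameterized ADO.NET (the table and column names are illustrative): the apostrophe is passed as data, never spliced into the SQL text, so it cannot change the query's structure.

```csharp
using System.Data.SqlClient;

public static int CountCustomersByLastName(string connectionString, string lastName)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT COUNT(*) FROM Customers WHERE LastName = @lastName", connection))
    {
        // "O'Grady" arrives intact as a parameter value; no manual escaping.
        command.Parameters.AddWithValue("@lastName", lastName);
        connection.Open();
        return (int)command.ExecuteScalar();
    }
}
```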
A classic example of the third point in ASP.NET is the code that blocks "suspicious" input containing < and >, something that makes a great number of ASP.NET pages buggy. Granted, it's better to be buggy by blocking input inappropriately than buggy by accepting it inappropriately, but those defaults are for people who haven't thought about validation, to stop them hurting themselves too badly. Since you are thinking about validation, you should consider whether it's appropriate to turn that automatic validation off and treat < and > in a manner appropriate to your given use.
Note also that I haven't said anything about JavaScript. I don't validate JavaScript (unless perhaps I was actually receiving it as input); I ignore it. I pretend it doesn't exist, and then I won't miss a case where its validation could be tampered with. Pretend yours doesn't exist at this layer too. Ultimately, client-side validation is there to save the good guys time when they make honest mistakes, not to thwart the bad guys.
For similar reasons, this is best not tested through a browser. Use Fiddler to construct requests that hit the validation points you want to examine. That way all client-side validation is bypassed, and you're looking at the server the same way an attacker will.
Finally, remember that a page with 100% perfect validation is not necessarily secure. E.g. if your validation is perfect but your authentication is poor, then someone can send "valid" content that is just as nasty as, or nastier than, classic SQL injection or XSS code. That touches on other topics that belong to other questions; the point here is that validation as discussed above is only part of the puzzle.

Should Exception Messages be Globalized

I'm working on a project and I'm just starting to do all the work necessary to globalize the application. One question that comes up quite often is whether to globalize the exception messages: that would mean ensuring that string.Format uses CultureInfo.CurrentCulture instead of CultureInfo.InvariantCulture, and storing the exception messages in resource files that can be marked as culture-specific.
So the question is: should exception messages be globalized, or should they be left in either the invariant culture or the author's language, in my case en-US?
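For concreteness, the pattern under discussion might look like the following sketch (the resource base name "MyApp.ErrorMessages" and the key are hypothetical):

```csharp
using System;
using System.Globalization;
using System.Resources;

public static class Errors
{
    // Backed by a hypothetical ErrorMessages.resx (plus culture-specific
    // satellites such as ErrorMessages.fr.resx); the key "OrderNotPaid"
    // would map to a format string like "Order {0} has not been paid."
    private static readonly ResourceManager Resources =
        new ResourceManager("MyApp.ErrorMessages", typeof(Errors).Assembly);

    public static Exception OrderNotPaid(int orderId)
    {
        string format = Resources.GetString("OrderNotPaid", CultureInfo.CurrentUICulture);
        return new InvalidOperationException(
            string.Format(CultureInfo.CurrentCulture, format, orderId));
    }
}
```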
Exception messages should rarely be displayed directly to the user. You need to think of the consumer for each string. Obviously pieces of text in the user interface need internationalizing, but if an exception message is only going to be seen by support (or is going to be visible to the user and then emailed to support when they click a button) then where's the benefit of translating it?
If you go too far, you could not only waste time and effort (and i18n can take a lot of effort) but also make your support life harder. You really don't want to have to read log files written in a foreign language and translate them back into your native tongue.
It makes sense for Microsoft to internationalize their exception messages, because they will be read by developers from all over the world, but unless you're a multinational with developers in multiple countries who don't share a common language, I wouldn't translate messages that are really meant for dev/support.
Typically, I don't.
Globalize strings that may be seen by a user, and you don't let your exception messages percolate up to the UI, right?
Right? :)
If you are going to be the one to deal with the exceptions, then either leave them in a language you can understand, or give them codes so you can look them up in your native language.
I assume that by globalize you mean i18n-compliant, which is usually called internationalized. Yes, internationalize all visible parts of the GUI, including diagnostic messages. The log file, which is where developers should go for the real information such as the stack trace, should not be internationalized.

WCF Passthrough

I have an N-tier structure composed of WCF nodes. I occasionally need to pass very large volumes of data from a terminal node to the top node, and I would like to avoid deserializing the very large data field during the intermediate hops. I can't pass directly to the top node due to our failover strategy. Is there any way to avoid deserializing my field? Thanks for any help.
Maybe you can do something with an [OnDeserializing] event?
See this.
Also, the serialization events are covered in "Programming WCF Services" (2nd ed.) by Juval Lowy, chapter 3, pp. 107-110.
I'm not sure if you can completely short-circuit deserialization though... I've never tried.
I think Terry's on the right track. I would look at that event, and by using a message contract you should be able to mark the part of the message you just want to pass through. You'll probably need to do some message manipulation (tear apart the incoming message, create a "custom" outgoing message), but you should be able to have the message continue on without being looked at.
Do a search on WS-Addressing too; it may provide a pattern for doing this.
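One hedged sketch of the pass-through idea, using WCF's untyped Message with a universal contract (Action="*"); the binding and address are illustrative, and error handling and channel cleanup are skipped:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IMessageRouter
{
    // Action="*" / ReplyAction="*" lets this single operation accept any
    // message; the untyped Message body is never deserialized into objects.
    [OperationContract(Action = "*", ReplyAction = "*")]
    Message ProcessMessage(Message request);
}

public class ForwardingRouter : IMessageRouter
{
    public Message ProcessMessage(Message request)
    {
        // Forward to the next node under the same contract; the body is
        // relayed as raw XML infoset rather than rebuilt from data contracts.
        var factory = new ChannelFactory<IMessageRouter>(
            new BasicHttpBinding(),
            new EndpointAddress("http://next-node/service"));  // illustrative
        IMessageRouter next = factory.CreateChannel();
        return next.ProcessMessage(request);
    }
}
```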
I wonder if your failover strategy would be amenable to a "snapping the link" sort of approach. You would make your initial call to the intermediate node, which would eventually forward it to the terminal node. The terminal node would respond with the information necessary for the initial node to connect to it directly.
That way, load balancing or failover could determine which terminal node should be used, but once that determination is made, a direct connection could take place. Of course, you'd want to limit the duration of that direct connection so the failover strategy can change its mind over time.
