Deserializing ServiceBus content in Azure Logic App - c#

I'm trying to read the content body of a message in an Azure Logic App, but I'm not having much success. I have seen a lot of suggestions which say that the body is base64 encoded, and suggest using the following to decode:
@{json(base64ToString(triggerBody()?['ContentData']))}
The base64ToString(...) part is decoding the content into a string correctly, but the string appears to contain a prefix with some extra serialization information at the start:
@string3http://schemas.microsoft.com/2003/10/Serialization/�3{"Foo":"Bar"}
There are also some extra characters in that string that are not being displayed in my browser. So the json(...) function doesn't accept the input, and gives an error instead.
InvalidTemplate. Unable to process template language expressions in
action 'HTTP' inputs at line '1' and column '2451': 'The template
language function 'json' parameter is not valid. The provided value
@string3http://schemas.microsoft.com/2003/10/Serialization/�3{"Foo":"bar" }
cannot be parsed: Unexpected character encountered while parsing value: @. Path '', line 0, position 0.. Please see https://aka.ms/logicexpressions#json for usage details.'.
For reference, the messages are added to the topic using the .NET service bus client (the client shouldn't matter, but this looks rather C#-ish):
await TopicClient.SendAsync(new BrokeredMessage(JsonConvert.SerializeObject(item)));
How can I read this correctly as a JSON object in my Logic App?

This is caused by how the message is placed on the ServiceBus, specifically in the C# code. I was using the following code to add a new message:
var json = JsonConvert.SerializeObject(item);
var message = new BrokeredMessage(json);
await TopicClient.SendAsync(message);
This code looks fine, and works without problems between different C# services. The problem is caused by the way the BrokeredMessage(Object) constructor serializes the payload given to it:
Initializes a new instance of the BrokeredMessage class from a given object by using DataContractSerializer with a binary XmlDictionaryWriter.
That means the content is serialized as binary XML, which explains the prefix and the unrecognizable characters. This is hidden by the C# implementation when deserializing, and it returns the object you were expecting, but it becomes apparent when using a different library (such as the one used by Azure Logic Apps).
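To illustrate the hidden round trip (a sketch, not from the original post; subscriptionClient and the Item type are hypothetical stand-ins), the .NET receiver runs the body back through the same DataContractSerializer, so C# code never sees the framing:
// Receiving with the Microsoft.ServiceBus.Messaging client hides the binary XML framing.
BrokeredMessage received = await subscriptionClient.ReceiveAsync();
string json = received.GetBody<string>();               // back to {"Foo":"Bar"}
Item item = JsonConvert.DeserializeObject<Item>(json);  // works in C#, but Logic Apps sees the raw framing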
There are two alternatives to handle this problem:
Make sure the receiver can handle messages in binary XML format
Make sure the sender actually uses the format we want, e.g. JSON.
Paco de la Cruz's answer handles the first case, using substring, indexOf and lastIndexOf:
@json(substring(base64ToString(triggerBody()?['ContentData']), indexof(base64ToString(triggerBody()?['ContentData']), '{'), add(1, sub(lastindexof(base64ToString(triggerBody()?['ContentData']), '}'), indexof(base64ToString(triggerBody()?['ContentData']), '}')))))
As for the second case, fixing the problem at the source simply involves using the BrokeredMessage(Stream) constructor instead. That way, we have direct control over the content:
var json = JsonConvert.SerializeObject(item);
var bytes = Encoding.UTF8.GetBytes(json);
var stream = new MemoryStream(bytes);
var message = new BrokeredMessage(stream, true);
await TopicClient.SendAsync(message);
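On the receiving C# side, a message sent this way carries a raw body stream rather than a DataContract payload, so it should be read back as a stream. A rough counterpart sketch (again assuming a hypothetical subscriptionClient and Item type):
BrokeredMessage received = await subscriptionClient.ReceiveAsync();
using (var reader = new StreamReader(received.GetBody<Stream>(), Encoding.UTF8))
{
    var json = await reader.ReadToEndAsync();
    var item = JsonConvert.DeserializeObject<Item>(json);
}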

You can use the substring function together with indexOf and lastIndexOf to get only the JSON substring.
Unfortunately, it's rather complex, but it should look something like this:
@json(substring(base64ToString(triggerBody()?['ContentData']), indexof(base64ToString(triggerBody()?['ContentData']), '{'), add(1, sub(lastindexof(base64ToString(triggerBody()?['ContentData']), '}'), indexof(base64ToString(triggerBody()?['ContentData']), '}')))))
More info on how to use these functions here.
HTH

Paco de la Cruz's solution worked for me, though I had to swap out the last '}' in the expression for a '{', otherwise it finds the wrong end of the data segment.
I also split it into two steps to make it a little more manageable.
First I get the decoded string out of the message into a variable (that I've called MC) using:
@{base64ToString(triggerBody()?['ContentData'])}
then in another Logic App action I do the substring extraction:
@{substring(variables('MC'),indexof(variables('MC'),'{'),add(1,sub(lastindexof(variables('MC'),'}'),indexof(variables('MC'),'{'))))}
Note that the last string literal '{' is reversed from Paco's solution.
This is working for my test cases, but I'm not sure how robust this is.
Also, I've left it as a string; I do the conversion to JSON later in my Logic App.
UPDATE
We have found that just occasionally (2 in several hundred runs) the text that we want to discard can contain the '{' character.
I have modified our expression to explicitly locate the start of the data segment, which for me is:
'{"IntegrationRequest"'
so the substitution becomes:
@{substring(variables('MC'),indexof(variables('MC'),'{"IntegrationRequest"'),add(1,sub(lastindexof(variables('MC'),'}'),indexof(variables('MC'),'{"IntegrationRequest"'))))}

Related

Send XML message to MSMQ without any formatting

I need to send an XmlDocument object to MSMQ. I don't have a class to deserialize it into (the XML may vary). The default formatter, XmlMessageFormatter, will "pretty print" the object. This causes a problem since
<text></text>
Will be converted to
<text>
</text>
(i.e. CR + spaces). The message is being read by a process using the default XmlMessageFormatter, and this hasn't been an issue whilst nodes have data in them. It is an issue further down the line, however, as a process (out of my control) will interpret these new characters as data and cause an error.
I know I could write some code to convert them using IsEmpty = true giving <text /> but I’d like a solution that doesn’t alter the object at all.
BinaryMessageFormatter will prefix the data with BOM data (the receiver is not expecting that) and ActiveXMessageFormatter will double-byte the string (again causing issues at the other end).
I would rather avoid having to write a custom message formatter. I've tried various options on the XmlMessageFormatter but they've had little effect. Any ideas would be very much appreciated.
MSMQ operates on raw blobs. You do not have to use a formatter unless you want to.
To send a message and get it back byte-for-byte identical, use the BodyStream property.
Example:
var queue = new MessageQueue(@".\private$\queueName");
var msg = new Message();
// Write the raw bytes directly; no formatter is involved, so nothing gets reformatted.
msg.BodyStream = new MemoryStream(Encoding.UTF8.GetBytes("<root><test></test></root>"));
queue.Send(msg);
The resultant message arrives with the body exactly as written, byte for byte.
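For completeness, a sketch of the matching receive side with no formatter involved (the queue path and encoding are assumptions):
var queue = new MessageQueue(@".\private$\queueName");
Message received = queue.Receive();
using (var reader = new StreamReader(received.BodyStream, Encoding.UTF8))
{
    string xml = reader.ReadToEnd();   // "<root><test></test></root>", exactly as sent
}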

C# - Deserializing when whitespace between tags is delimited

I am posting some XML to an API Gateway method in AWS, which has an integration to SNS. An SQS queue is then subscribed to the topic, and I have a C# process which polls the queue intermittently and needs to deserialize the XML.
The trouble is, the whitespace between the XML tags ends up getting encoded somewhere along the line, so tabs become \t and new lines become \r\n. But these end up as literal character sequences inside the string.
Example XML which is posted to API Gateway:
<?xml version="1.0" encoding="utf-8"?>
<ProfileInformation>
<Username>bgs264</Username>
</ProfileInformation>
String which is read off the SQS queue:
<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<ProfileInformation>\n\t<Username>bgs264</Username>\n</ProfileInformation>
Note that the attributes in the declaration end up as \" and the whitespace posted ends up as \t, \r\n, etc.
However, this isn't a case of "the string only appears like that in the debugger, but it's actually a tab character"; the backslash sequences are literally present in the string.
So when I try to deserialize, using
using (var reader = new StringReader(message))
{
    var myObj = serializer.Deserialize(reader) as ProfileInformation;
}
I get:
InvalidOperationException: There is an error in XML document (1, 15).
It refers to the first \ character in the declaration, as in version=\"1.0\"
My immediate idea was to simply string.Replace \t with an empty string, etc., but that's unacceptable because the user's username might legitimately be bgs\t264, and the replace would cause an inconsistency. In this example, I presume I would get bgs\\t264 in the message, so a replace would leave me, erroneously, with bgs\264 for example.
So I need to fix these \n\t characters where they occur between XML tags.
For what it's worth, I also have a lambda written in Go which has no problem with this and simply deserializes the exact same string straight into XML. So it must be possible.
My initial thoughts:
Can I somehow decode the string before passing it for deserialization? I tried this with HttpUtility.HtmlDecode, but I don't think it's actually HTML that I'm trying to decode!
Is there a different XML library I can use that would work?
I would guess, and some googling seems to support the theory, that the message you're seeing has been converted to JSON and that the escape sequences are a consequence of that.
The ideal approach would be to investigate and prevent this from happening. I don't know enough about SNS to advise, and you indicate this is a non-starter, so the simplest approach would be to reverse this process once you receive the message.
You can use a JSON library like Json.NET to do this:
var serializer = new XmlSerializer(typeof(ProfileInformation));

// Wrap the escaped text in quotes so it is a valid JSON string literal,
// then let Json.NET undo the escaping.
var jsonString = string.Format("\"{0}\"", message);
var xmlString = JsonConvert.DeserializeObject<string>(jsonString);

using (var reader = new StringReader(xmlString))
{
    var profileInformation = (ProfileInformation) serializer.Deserialize(reader);
}
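For the string shown in the question, this round trip turns the literal escape sequences back into real characters, roughly:
// Before: <?xml version=\"1.0\" encoding=\"utf-8\"?>\n<ProfileInformation>\n\t<Username>bgs264</Username>\n</ProfileInformation>
// After:  <?xml version="1.0" encoding="utf-8"?>
//         <ProfileInformation>
//             <Username>bgs264</Username>
//         </ProfileInformation>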

Weird behavior C#

Somehow I'm getting a weird result from a GetString(). So, in my project I got this code:
byte[] arrayBytes = System.Convert.FromBase64String(n["spo_fdat"].InnerText);
string str = System.Text.Encoding.UTF8.GetString(arrayBytes);
The InnerText value and the code are at: https://dotnetfiddle.net/mMUlti
So, my problem is that in Visual Studio the decoded string somehow comes out with \0 characters in it, while the online compiler that I posted above gives the expected output.
This output is sent to a printer, and these \0 characters are destroying the format.
Anyone have a clue of what is going on and what should I do/try?
It looks like for some reason every other byte in your input is null. If you strip those out you get something that looks much more plausible as printer commands (though I am no expert). Hopefully you can verify things...
To do this, all I did was add this line:
arrayBytes = arrayBytes.Where((x,i)=>i%2==0).ToArray();
The Where call takes the value (x) and the index (i); if the index mod 2 is 0 (i.e. it's even) the byte is kept, and if it's odd it is thrown away.
The output I get from this starts:
CT~~CD,~CC^~CT~
^XA~TA000~JSN^LT0^MNW^MTT^PON^PMN^LH0,0^JMA^PR2,2~SD15^JUS^LRN^CI0^XZ
^XA
^MMT
^PW607
^LL0406
There are some non-printing characters in there too that look like possible printer commands (e.g. the first character is 16, the "data link escape" character).
Edited afterthought:
The problem you have here is obviously a problem with the specification; it seems your input is wrong. You need to talk to whoever generated it, find out the specification they are using to generate it, make sure their code matches that spec, and then write your code to accept that spec. With a solid specification you should both be writing compatible code.
Try inspecting the bytes instead. You'll see that what you have encoded in the base-64 string is much closer to what Visual Studio shows you than to the output from dotnetfiddle. Consoles usually don't escape non-printables (such as \0, the null character), whereas the Visual Studio string inspector does so in an attempt to provide as much value to its user as possible.
Looking at your base-64 encoded data, it looks much more like UTF-16 than UTF-8. If you decode it as such, you'll perhaps get rid of the null characters in the Visual Studio inspector as well.
Regardless of that, the base-64 data doesn't make much sense. More semantic context is required to figure out what the issue is.
According to Chris's inspection, it looks like the data is UTF-8 text encoded as UTF-16.
You should be able to get proper results with the following:
var xml = "...";                                           // your base-64 input
var arrayBytes = Convert.FromBase64String(xml);
var utf16 = Encoding.Unicode.GetString(arrayBytes);        // read the bytes as UTF-16
var utf8Bytes = utf16.Select(c => (byte)c).ToArray();      // narrow each code unit back to a byte
var utf8 = Encoding.UTF8.GetString(utf8Bytes);             // decode those bytes as UTF-8
Console.WriteLine(utf8);
The opposite is probably how your input was created. However, you could also go with Chris's solution of ignoring every odd byte, as it is basically the same with fewer encoding contortions (although the version above may be more explicit about what is really going on: UTF-8 inside UTF-16).
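For illustration only, here is one way such input could plausibly have been produced (the printer string is a made-up sample): taking ASCII/UTF-8 text and encoding it with Encoding.Unicode before the base-64 step yields exactly the every-other-byte-is-null pattern described above.
var printerCommands = "^XA^MMT^PW607^LL0406^XZ";   // hypothetical sample data
var mangledBase64 = Convert.ToBase64String(Encoding.Unicode.GetBytes(printerCommands));
// Decoding mangledBase64 with Encoding.UTF8.GetString() now shows a \0 after every character.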

cleaning JSON for XSS before deserializing

I am using the Newtonsoft JSON deserializer. How can one clean JSON for XSS (cross-site scripting)? Either by cleaning the JSON string before de-serializing, or by writing some kind of custom converter/sanitizer? I am not 100% sure about the best way to approach this.
Below is an example of JSON that has a dangerous script injected and needs "cleaning." I want a way to manage this before I de-serialize it. But we need to assume all kinds of XSS scenarios, including Base64-encoded script etc., so the problem is more complex than a simple regex string replace.
{ "MyVar" : "hello<script>bad script code</script>world" }
Here is a snapshot of my deserializer ( JSON -> Object ):
public T Deserialize<T>(string json)
{
    T obj;
    var JSON = cleanJSON(json);                  // OPTION 1: sanitize here
    var customConverter = new JSONSanitizer();   // OPTION 2: create a custom converter
    obj = JsonConvert.DeserializeObject<T>(json, customConverter);
    return obj;
}
JSON is posted from a 3rd party UI interface, so it's fairly exposed, hence the server-side validation. From there, it gets serialized into all kinds of objects and is usually stored in a DB, later to be retrieved and outputted directly in HTML based UI so script injection must be mitigated.
Ok, I am going to try to keep this rather short, because this is a lot of work to write up the whole thing. But, essentially, you need to focus on the context of the data you need to sanitize. From comments on the original post, it sounds like some values in the JSON will be used as HTML that will be rendered, and this HTML comes from an un-trusted source.
The first step is to extract whichever JSON values need to be sanitized as HTML, and for each of those objects you need to run them through an HTML parser and strip away everything that is not in a whitelist. Don't forget that you will also need a whitelist for attributes.
HTML Agility Pack is a good starting place for parsing HTML in C#. How to do this part is a separate question in my opinion - and probably a duplicate of the linked question.
Your worry about base64 strings seems a little over-emphasized in my opinion. It's not like you can simply put aW5zZXJ0IGg0eCBoZXJl into an HTML document and the browser will render it. It can be abused through javascript (which your whitelist will prevent) and, to some extent, through data: URLs (but this isn't THAT bad, as the javascript will run in the context of the data page; not good, but you aren't automatically gobbling up cookies with this). If you have to allow a tags, part of the process needs to be validating that the URL is http(s) (or whatever schemes you want to allow).
Ideally, you would avoid this uncomfortable situation, and instead use something like markdown - then you could simply escape the HTML string, but this is not always something we can control. You'd still have to do some URL validation though.
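As a rough illustration of the whitelist idea (not a complete sanitizer; the tag and attribute lists here are placeholders you would have to define for your own content), something along these lines with HTML Agility Pack:
using System.Collections.Generic;
using System.Linq;
using HtmlAgilityPack;

public static class HtmlSanitizer
{
    // Placeholder whitelists - adjust to whatever markup you actually allow.
    static readonly HashSet<string> AllowedTags = new HashSet<string> { "b", "i", "em", "strong", "p", "br", "ul", "ol", "li" };
    static readonly HashSet<string> AllowedAttributes = new HashSet<string> { "title" };

    public static string Sanitize(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        foreach (var node in doc.DocumentNode.Descendants().ToList())
        {
            // Remove any element that is not on the whitelist (script, iframe, etc.), content included.
            if (node.NodeType == HtmlNodeType.Element && !AllowedTags.Contains(node.Name))
            {
                node.Remove();
                continue;
            }

            // Strip attributes that are not whitelisted (onclick, style, href, ...).
            foreach (var attribute in node.Attributes.ToList())
            {
                if (!AllowedAttributes.Contains(attribute.Name))
                    node.Attributes.Remove(attribute);
            }
        }

        return doc.DocumentNode.InnerHtml;
    }
}
Applied to the example value above, "hello<script>bad script code</script>world" would come out as "helloworld".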
Interesting!! Thanks for asking. We normally use HttpUtility.UrlEncode in Web Forms. I have an enterprise Web API running that has validations like this. We have created a custom regex to validate; please have a look at this MSDN link.
Here is a sample model, named KeyValue (say), created to parse the request:
public class KeyValue
{
    public string Key { get; set; }
}
Step 1: Trying with a custom regex
var json = @"[{ 'MyVar' : 'hello<script>bad script code</script>world' }]";
JArray readArray = JArray.Parse(json);
IList<KeyValue> blogPost = readArray.Select(p => new KeyValue { Key = (string)p["MyVar"] }).ToList();
foreach (var item in blogPost)
{
    if (!Regex.IsMatch(item.Key,
        @"^[\p{L}\p{Zs}\p{Lu}\p{Ll}\']{1,40}$"))
        Console.WriteLine("InValid");
}
// ^ means start looking at this position.
// \p{..} matches any character in the named Unicode character class specified by {..}.
// {L} matches any letter.
// {Lu} matches uppercase letters.
// {Ll} matches lowercase letters.
// {Zs} matches separators and spaces.
// ' matches an apostrophe.
// {1,40} specifies the number of characters: no less than 1 and no more than 40.
// $ means stop looking at this position.
Step 2: Using HttpUtility.UrlEncode - this Newtonsoft documentation link suggests the implementation below.
string json = @"[{ 'MyVar' : 'hello<script>bad script code</script>world' }]";
JArray readArray = JArray.Parse(json);
IList<KeyValue> blogPost = readArray.Select(p => new KeyValue { Key = HttpUtility.UrlEncode((string)p["MyVar"]) }).ToList();
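With that in place, the stored value for the sample above comes out URL-encoded, so nothing in it can execute if it is later written into a page. Roughly:
// blogPost[0].Key now holds something like:
// hello%3cscript%3ebad+script+code%3c%2fscript%3eworld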

Parsing XML in VB.Net is failing due to a special character

I have some VB.Net code which is parsing an XML string.
The XML String comes from a TCP 3rd Party stream and as such we have to take the data we get and deal with it.
The issue we have is that one element's data can sometimes contain special characters, e.g. &, $, <, and thus when "XMLDoc.LoadXml(XML)" is executed it fails - note XMLDoc is declared as "Dim XMLDoc As XmlDocument = New XmlDocument()".
I have tried to Google answers for this but I am really struggling to find a solution. I have looked at RegEx but realised it has some limitations; or I just don't understand it well enough.
If it helps, here is an example of the XML we would have streamed to us (just for info, the message tag comes from an SMS message).
(If it helps, the only bit that will ever have an error, and all I have to check, is the <Message>O&N</Message> section; so in this case the message has come in with an &.)
<IncomingMessage><DeviceSendTime>19/02/2013 14:00:50</DeviceSendTime>
<Sender>0000111111</Sender>
<Status>New</Status>
<Transport>Sms</Transport>
<Id>-1</Id>
<Message>O&N</Message>
<Timestamp>19/02/2013 14:00:50</Timestamp>
<ReadTimestamp>19/02/2013 14:00:50</ReadTimestamp>
</IncomingMessage>
If we're looking specifically within Message elements, and assuming there are no nested elements within the Message element:
Dim url = "put url here"
Dim s As String
Dim characterMappings = New Dictionary(Of String, String) From {
    {"&", "&amp;"},
    {"<", "&lt;"},
    {">", "&gt;"},
    {"""", "&quot;"}
}
Using client As New WebClient
s = client.DownloadString(url)
End Using
' Lookarounds keep the match to just the special character, so only it gets replaced.
s = Regex.Replace(s,
    "(?<=<Message>.*?)(" & String.Join("|", characterMappings.Keys) & ")(?=.*?</Message>)",
    Function(match) characterMappings(match.Groups(1).Value)
)
Dim x = XDocument.Parse(s)
$ should not be an issue with XML, but if it is you can add it to the dictionary.
Use of WebClient comes from here.
Updated
Since $ has special meaning in regular expressions, it cannot simply be joined in from the dictionary keys; it needs to be escaped with \ in the regular expression pattern. The simplest way to do this is to write the pattern manually, instead of joining the keys of the dictionary:
s = Regex.Replace(s,
    "(?<=<Message>.*?)(&|<|>|\$)(?=.*?</Message>)",
    Function(match) characterMappings(match.Groups(1).Value)
)
Also, I highly recommend Expresso for working with regular expressions.
Your XML is invalid, and hence it is not XML. Either fix the code that generates the XML (the correct approach) or pretend it is a text file and enjoy all the problems that come with parsing non-structured text.
As you've stated in the question, <Message>O&N</Message> is not valid XML. The most likely cause of such "XML" is using string concatenation to construct it, instead of using proper XML manipulation methods. Unless you use some arcane language, practically all languages in use have built-in or library support for XML creation, so it should not be too hard to create the XML correctly.
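As a small illustration of that last point (shown in C# for consistency with the rest of this page; the same System.Xml.Linq API is available from VB.Net), an XML API escapes text content for you:
using System;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        // Building the element through the API escapes the & automatically.
        var message = new XElement("Message", "O&N");
        Console.WriteLine(message);   // prints: <Message>O&amp;N</Message>
    }
}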
