I am invoking a service that returns responses in XML format. The response doesn't follow the XML guidelines and contains some newlines and "\" characters.
Due to these formatting issues, the deserialization is failing.
XML Format:
\r\n\r\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\r\n<N><details><date>25042014</date><orderNumber>OrderNumber </orderNumber><Response>1</Response></details>
I worked around the problem by removing the newlines and "\" before deserialization, but I was searching for a cleaner solution if one exists.
The XML file has to be well-formed and must correspond to an XSD structure. The escape sequences and newlines destroy the well-formedness, so the document no longer corresponds to the XSD structure, which in turn causes the deserialization to fail. As far as I know, there is no way around it except to read the file beforehand, remove the unwanted characters and sequences, and save it again, so that it may be successfully deserialized when read by an XmlDocument.
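For reference, a minimal sketch of that pre-cleaning step in C#, assuming the payload was saved as-is with literal \r\n and \" escape sequences as in the sample above (the N type and file name are placeholders, not the real contract):

using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical type matching the sample's root element <N>; the real
// properties depend on your contract.
[XmlRoot("N")]
public class N
{
}

public class Cleaner
{
    public static void Main()
    {
        string raw = File.ReadAllText("response.txt");

        // Remove the literal escape sequences, then cut anything
        // before the XML declaration.
        string cleaned = raw.Replace("\\r\\n", "").Replace("\\\"", "\"");
        int start = cleaned.IndexOf("<?xml", StringComparison.Ordinal);
        if (start > 0)
            cleaned = cleaned.Substring(start);

        var serializer = new XmlSerializer(typeof(N));
        using (var reader = new StringReader(cleaned))
        {
            var response = (N)serializer.Deserialize(reader);
        }
    }
}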
Related
Currently, I'm working on a feature that involves parsing XML that we receive from another product. I decided to run some tests against some actual customer data, and it looks like the other product is allowing input from users that should be considered invalid. Anyway, I still have to try and figure out a way to parse it. We're using javax.xml.parsers.DocumentBuilder and I'm getting an error on input that looks like the following:
<xml>
...
<description>Example:Description:<THIS-IS-PART-OF-DESCRIPTION></description>
...
</xml>
As you can tell, the description has what appears to be an invalid tag inside of it (<THIS-IS-PART-OF-DESCRIPTION>). Now, this description tag is known to be a leaf tag and shouldn't have any nested tags inside of it. Regardless, this is still an issue and yields an exception on DocumentBuilder.parse(...).
I know this is invalid XML, but it's predictably invalid. Any ideas on a way to parse such input?
That "XML" is worse than invalid – it's not well-formed; see Well Formed vs Valid XML.
An informal assessment of the predictability of the transgressions does not help. That textual data is not XML. No conformant XML tools or libraries can help you process it.
Options, most desirable first:
Have the provider fix the problem on their end. Demand well-formed XML. (Technically the phrase well-formed XML is redundant but may be useful for emphasis.)
Use a tolerant markup parser to clean up the problem ahead of parsing as XML:
Standalone: xmlstarlet has robust recovering and repair capabilities (credit: RomanPerekhrest):
xmlstarlet fo -o -R -H -D bad.xml 2>/dev/null
Standalone and C/C++: HTML Tidy works with XML too. Taggle is a port of TagSoup to C++.
Python: Beautiful Soup is Python-based. See notes in the Differences between parsers section. See also answers to this question for more suggestions for dealing with not-well-formed markup in Python, including especially lxml's recover=True option. See also this answer for how to use codecs.EncodedFile() to clean up illegal characters.
Java: TagSoup and JSoup focus on HTML. FilterInputStream can be used for preprocessing cleanup.
.NET:
XmlReaderSettings.CheckCharacters can be disabled to get past illegal XML character problems.
@jdweng notes that XmlReaderSettings.ConformanceLevel can be set to ConformanceLevel.Fragment so that XmlReader can read XML Well-Formed Parsed Entities lacking a root element (see the sketch after this list).
@jdweng also reports that XmlReader.ReadToFollowing() can sometimes be used to work around XML syntactical issues, but note the rule-breaking warning in #3 below.
Microsoft.Language.Xml.XMLParser is said to be “error-tolerant”.
Go: Set Decoder.Strict to false as shown in this example by @chuckx.
PHP: See DOMDocument::$recover and libxml_use_internal_errors(true). See nice example here.
Ruby: Nokogiri supports “Gentle Well-Formedness”.
R: See htmlTreeParse() for fault-tolerant markup parsing in R.
Perl: See XML::Liberal, a "super liberal XML parser that parses broken XML."
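To illustrate the .NET ConformanceLevel.Fragment option mentioned above, a minimal sketch (the fragment string is illustrative):

using System;
using System.IO;
using System.Xml;

class FragmentDemo
{
    static void Main()
    {
        // Illustrative rootless input: two sibling elements, no root.
        string fragment = "<item>1</item><item>2</item>";

        var settings = new XmlReaderSettings
        {
            ConformanceLevel = ConformanceLevel.Fragment
        };

        using (var reader = XmlReader.Create(new StringReader(fragment), settings))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element)
                    Console.WriteLine(reader.Name); // prints "item" twice
            }
        }
    }
}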
Process the data as text manually using a text editor or programmatically using character/string functions. Doing this programmatically can range from tricky to impossible, as what appears to be predictable often is not -- rule breaking is rarely bound by rules.
For invalid character errors, use regex to remove/replace invalid characters (a .NET version follows below):
PHP: preg_replace('/[^\x{0009}\x{000a}\x{000d}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}]+/u', ' ', $s);
Ruby: string.tr("^\u{0009}\u{000a}\u{000d}\u{0020}-\u{D7FF}\u{E000}-\u{FFFD}", ' ')
JavaScript: inputStr.replace(/[^\x09\x0A\x0D\x20-\xFF\x85\xA0-\uD7FF\uE000-\uFDCF\uFDE0-\uFFFD]/gm, '')
For ampersands, use regex to replace matches with &amp; (credit: blhsin, demo):
&(?!(?:#\d+|#x[0-9a-f]+|\w+);)
Note that the above regular expressions won't take comments or CDATA sections into account.
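In the same spirit, a .NET flavor of the invalid-character cleanup (a sketch; note that the pattern tests UTF-16 code units, so astral-plane characters encoded as surrogate pairs are stripped as well):

using System.Text.RegularExpressions;

static class XmlCharCleaner
{
    // Remove code units outside the XML 1.0 Char production.
    public static string Clean(string s) =>
        Regex.Replace(s, @"[^\x09\x0A\x0D\x20-\uD7FF\uE000-\uFFFD]", " ");
}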
A standard XML parser will NEVER accept invalid XML, by design.
Your only option is to pre-process the input to remove the "predictably invalid" content, or wrap it in CDATA, prior to parsing it.
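For example, a pre-processing sketch (shown in C# here; the same substitution works before Java's DocumentBuilder) that escapes the known offender from the question above before parsing. The root is named data because, as noted further down, element names beginning with xml are reserved:

using System.Text.RegularExpressions;
using System.Xml;

class PreClean
{
    static void Main()
    {
        // Input containing the known, predictable bogus tag.
        string dirty =
            "<data><description>Example:<THIS-IS-PART-OF-DESCRIPTION></description></data>";

        // Escape just that tag so the document becomes well-formed.
        string clean = Regex.Replace(
            dirty,
            "<(/?THIS-IS-PART-OF-DESCRIPTION)>",
            "&lt;$1&gt;");

        var doc = new XmlDocument();
        doc.LoadXml(clean); // parses now; the description keeps its text
    }
}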
The accepted answer is good advice, and contains very useful links.
I'd like to add that this, and many other cases of not-well-formed and/or DTD-invalid XML, can be repaired using SGML, the ISO-standardized superset of HTML and XML. In your case, what works is to declare the bogus THIS-IS-PART-OF-DESCRIPTION element as an SGML empty element and then use e.g. the osx program (part of the OpenSP/OpenJade SGML package) to convert it to XML. For example, if you supply the following to osx
<!DOCTYPE xml [
<!ELEMENT xml - - ANY>
<!ELEMENT description - - ANY>
<!ELEMENT THIS-IS-PART-OF-DESCRIPTION - - EMPTY>
]>
<xml>
<description>blah blah
<THIS-IS-PART-OF-DESCRIPTION>
</description>
</xml>
it will output well-formed XML for further processing with the XML tools of your choice.
Note, however, that your example snippet has another problem in that element names starting with the letters xml or XML or Xml etc. are reserved in XML, and won't be accepted by conforming XML parsers.
IMO these cases should be solved by using JSoup.
Below is a not-really-an-answer for this specific case, but I found this on the web (thanks to inuyasha82 on Coderwall). This code bit inspired me for another similar problem while dealing with malformed XMLs, so I share it here.
Please do not edit what is below, as it is as it appears on the original website.
To be valid, the XML format requires a unique root element declared in the document.
So, for example, a valid XML is:
<root>
<element>...</element>
<element>...</element>
</root>
But if you have a document like:
<element>...</element>
<element>...</element>
<element>...</element>
<element>...</element>
This will be considered malformed XML, so many XML parsers just throw an Exception complaining about a missing root element.
In this example there is a solution on how to solve that problem and successfully parse the malformed XML above.
Basically, what we will do is programmatically add a root element.
So first of all you have to open the resource that contains your "malformed" XML (i.e. a file):
File file = new File(pathtofile);
Then open a FileInputStream:
FileInputStream fis = new FileInputStream(file);
If we try to parse this stream with any XML library at this point, we will get the malformed document Exception.
Now we create a list of InputStream objects with three elements:
A ByteArrayInputStream element that contains the string: <root>
Our FileInputStream
A ByteArrayInputStream with the string: </root>
So the code is:
List<InputStream> streams =
Arrays.asList(
new ByteArrayInputStream("<root>".getBytes()),
fis,
new ByteArrayInputStream("</root>".getBytes()));
Now using a SequenceInputStream, we create a container for the List created above:
InputStream cntr =
    new SequenceInputStream(Collections.enumeration(streams));
Now we can use any XML parser library on cntr, and it will be parsed without any problem (checked with the StAX library).
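For what it's worth, the same root-wrapping trick is short in .NET as well; a sketch, with the file name as a placeholder:

using System.IO;
using System.Xml;

class WrapRoot
{
    static void Main()
    {
        // Wrap the rootless fragments in a synthetic root element.
        string fragments = File.ReadAllText("fragments.xml");
        var doc = new XmlDocument();
        doc.LoadXml("<root>" + fragments + "</root>");
    }
}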
I'm converting mass files to XML, and each file is either XML, JSON, CSV or PSV. To do the conversion I need to know what data type the file is without looking at the file extension (some are coming from APIs). Someone suggested that I try to parse each file as each of the types until I get a success, but that is pretty inefficient, and CSV can't be easily parsed since it is essentially just a text file (same as PSV).
Does anyone have any ideas on what I can do? Thanks.
You can have some kind of "pre-parsing":
Whether it starts with an XML declaration or directly with the root node, the first character of an XML file should be <.
The first character of a JSON file can only be { if the JSON is built on an object, or [ if it is built on an array.
For CSV and PSV (I guess PSV stands for Pipe-Separated Values?), each line of the file represents a specific record.
So by checking the first character, you may find that XML and/or JSON parsing is pointless.
Parsing the first line of the file should be enough to decide whether the file format is CSV or PSV.
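A sketch of that pre-parsing idea in C# (the return values and the pipe-versus-comma heuristic are illustrative, not a library API):

using System;
using System.IO;
using System.Linq;

static class FormatSniffer
{
    public static string GuessFormat(string path)
    {
        string text = File.ReadAllText(path).TrimStart();
        if (text.Length == 0) return "empty";

        // XML starts with '<'; JSON starts with '{' or '['.
        char first = text[0];
        if (first == '<') return "xml";
        if (first == '{' || first == '[') return "json";

        // Otherwise compare delimiters on the first line.
        string firstLine = text.Split('\n')[0];
        return firstLine.Count(c => c == '|') > firstLine.Count(c => c == ',')
            ? "psv"
            : "csv";
    }
}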
I found a lot of great questions on this subject. Unfortunately, the answers all say to use an XSD file. I made an XSD file from an XML file using xsd.exe. I copied code from here and pasted it into Visual Studio, and I got an error on the first line.
Not wanting to spend time figuring out why it would not run, I decided to code the validation myself.
Here are the two points I am using:
Each left caret will have a right caret, so at the end of the file there will be an equal number of left and right carets.
At the end of the file, if I take either the number of left carets or the number of right carets, subtract one from the total (because the header does not have a slash), and divide the total by two, I get the number of slashes.
I am encountering some problems though.
I am using string.count(). This method also counts carets that are in attribute values (which I do not want).
I compute the expected number of slashes when I am done reading the file. If the numbers do not match, I write "The expected number of slashes does not match", but I don't know where the mismatch is in the file.
I can't think of a way to fix these problems at the moment.
Does anyone have a better way to validate an XML file without using an XSD file?
Take note that WELL-FORMED XML is different from VALID XML.
XML that adheres to the XML standard is considered well-formed, while XML that adheres to a DTD is considered VALID.
If you just want to check whether XML is well-formed, try this:
try
{
var file = "Your xml path";
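    // Ignore DTDs and skip external resolution; this check is only about well-formedness.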
var settings = new XmlReaderSettings { DtdProcessing = DtdProcessing.Ignore, XmlResolver = null };
using (var reader = XmlReader.Create(new StreamReader(file), settings))
{
var document = new XmlDocument();
document.Load(reader);
}
}
catch (Exception exc)
{
//show the exception here
}
P.S.: Well-formedness of an XML document is always a prerequisite for its validity.
Hope it helps!
When you say "caret", I think you must be talking about the "<" and ">" symbols, which in the XML world are usually referred to as "angle brackets".
When you talk about checking whether the carets match up, you are therefore talking about whether the file conforms to XML syntax. That's called "well-formedness checking" in the XML world. Validation is something different and deeper. You need a schema (either an XSD schema or some other kind) for validation, but for well-formedness checking all you need is an XML parser.
Don't try to implement the well-formedness checking yourself. (a) because it's not easy, (b) because parsers are readily available off-the-shelf, and (c) because you clearly don't have a very advanced understanding of the problem. Just run your file through an XML parser and it will do the job for you.
I read XML files that sometimes contain elements like
<stringValue>text&#xA;text</stringValue>
XmlReader returns "text\ntext" for such strings.
So, when I rewrite the source XML later using XmlWriter, I don't get the same strings (there is no &#xA; in them).
Should I worry about all this, or is it fine to allow the string to be changed this way?
I would worry about it, yes, because you're manipulating the data. This means that if you do a round-trip to the XML document, the text formatting won't be the same.
You would need to make sure that, when saving back out to XML, you persist the same formatting.
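A sketch of what persisting that formatting would take with XmlWriter: WriteString emits a literal newline, so forcing the character-reference notation back out requires WriteCharEntity (the element name is taken from the question above; the rest is illustrative):

using System;
using System.IO;
using System.Xml;

class EntityRoundTrip
{
    static void Main()
    {
        var sw = new StringWriter();
        var settings = new XmlWriterSettings { OmitXmlDeclaration = true };

        using (var writer = XmlWriter.Create(sw, settings))
        {
            writer.WriteStartElement("stringValue");
            writer.WriteString("text");    // a plain string write...
            writer.WriteCharEntity('\n');  // ...but this emits &#xA;
            writer.WriteString("text");
            writer.WriteEndElement();
        }

        // <stringValue>text&#xA;text</stringValue>
        Console.WriteLine(sw);
    }
}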
&#xA; is the XML encoding for a new line character (\n). If your XML data has a new line in the text, then this notation is correct and the output from XmlWriter is correct. If the new line was not in the original XML data, it may have been introduced in transit: I've been seeing an issue with IE10/IE11 using the XMLHttpRequest object that inserts \r\n in the XML data.
My C# application loads XML documents using the following code:
XmlDocument doc = new XmlDocument();
doc.Load(path);
Some of these documents contain encoded characters, for example:
<xsl:text>&#10;</xsl:text>
I notice that when these documents are loaded, &#10; gets dropped.
My question: How can I preserve <xsl:text>&#10;</xsl:text>?
FYI - The XML declaration used for these documents:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
Are you sure the character is dropped? Character 10 is just a line feed; it wouldn't exactly show up in your debugger window. It could also be treated as whitespace. Have you tried playing with the whitespace settings on your XmlDocument?
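Along those lines, a minimal sketch of the whitespace setting in question (the path is a placeholder); with PreserveWhitespace set before loading, the whitespace-only text node inside <xsl:text> is kept:

using System.Xml;

class LoadPreserving
{
    static void Main()
    {
        string path = "stylesheet.xsl"; // placeholder

        // Keep whitespace-only text nodes instead of discarding them.
        var doc = new XmlDocument { PreserveWhitespace = true };
        doc.Load(path);
    }
}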
If you need to preserve the encoding you only have two choices: a CDATA section, or reading as plain text rather than XML. I suspect you have absolutely zero control over the documents that come into the system, which eliminates the CDATA option.
Plain text rather than XML is probably distasteful as well, but it's all you have left. If you need to do validation or other processing, you could first load and verify the XML, and then concatenate your files using simple file streams as a separate step. Again: not ideal, but it's all that's left.
&#10; is a linefeed, i.e. whitespace. The XML parser will load it in as a linefeed, and thereafter ignore the fact that it was originally encoded. The encoding is just part of the serialization of the data to text format; it's not part of the data itself.
Now, XML sometimes ignores whitespace and sometimes doesn't, depending on context, API, etc. As Joel says, you may find that it's not missing at all, or you may find that using an API which allows you to preserve whitespace fixes the problem. I wouldn't be at all surprised to see it turned into an unencoded linefeed character when you output the data, though.
Maybe it would be better to keep the data in a <![CDATA[ ]]> section?
http://www.w3schools.com/XML/xml_cdata.asp