I'm trying to create an XML structure (not reading/writing from/to a file, only the logical part). I'm really confused about how to go about it and what to use. After looking on the Internet I've seen two candidate classes: XmlDocument and XDocument (with their corresponding classes for nodes, elements, attributes etc.).
One of them must be deprecated, but I've checked fairly recent posts (2010 and even 2011) and they still suggest code using both variants. Also, a Google search for XDocument gives only a fraction of the hits that XmlDocument brings.
Even with that part sorted out, I've seen a number of different examples of how to compose a very simple XML structure. I guess the approach depends on the competence level of the person suggesting it, so I'm asking straight out: what classes and what syntax should I use?
I had the very same issue a few weeks ago. Apparently, you can't trust the numbers - XmlDocument gets roughly 20 times the hit count of XDocument, but the former is the older API and the latter is the one to use.
As for the explanation of this phenomenon, I'm assuming that a lot of code was written using the older class (and by that period's standards it was a huge convenience to use it). Years later, people keep using "what works" without realizing that there's a "new kid on the block".
And here's a simple example of how to create a logical XML structure.
XDocument xml = new XDocument(
    new XElement("main",
        new XElement("first", "---"),
        new XElement("second",
            new XElement("firstSub", "---"),    // element names must not contain spaces
            new XElement("secondSub", "---"))));
More on the subject can be read here.
There are two ways of building XML documents: using a class that holds the entire document in memory, or using a forward-only XML writer.
XDocument is the recommended way of building XML documents that can fit into memory. It is not suited for very large XML files. XmlDocument is not yet marked as deprecated, but I suspect it might be in future versions of the framework.
XmlWriter, on the other hand, is suited for building large XML documents that cannot fit into memory. Unlike XDocument, it is a forward-only writer: once a node is written you can no longer go back and modify it.
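For illustration, here is a minimal sketch of writing a similar structure with XmlWriter (the element and file names are placeholders, nothing prescribed):

using System.Xml;

XmlWriterSettings settings = new XmlWriterSettings { Indent = true };
using (XmlWriter writer = XmlWriter.Create("output.xml", settings))
{
    // Forward-only: each call appends to the output and cannot be undone.
    writer.WriteStartDocument();
    writer.WriteStartElement("main");
    writer.WriteElementString("first", "---");
    writer.WriteStartElement("second");
    writer.WriteElementString("firstSub", "---");
    writer.WriteEndElement(); // second
    writer.WriteEndElement(); // main
    writer.WriteEndDocument();
}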
There's also a third way to build XML documents which falls into the first category: XML serialization of .NET types. There are a couple of classes that let you achieve that, such as DataContractSerializer and XmlSerializer.
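As a rough sketch of that approach with XmlSerializer (the Person type and file name are made up for the example):

using System.IO;
using System.Xml.Serialization;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Serialize an object graph to XML and read it back.
var serializer = new XmlSerializer(typeof(Person));
using (var writer = new StreamWriter("person.xml"))
{
    serializer.Serialize(writer, new Person { Id = 1, Name = "Example" });
}
using (var reader = new StreamReader("person.xml"))
{
    Person roundTripped = (Person)serializer.Deserialize(reader);
}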
There are differences in performance!
Read Performance: LINQ to XML vs XmlDocument vs XmlReader
LINQ to XML is faster than XmlDocument.
Related
I need to read large XML files using .NET; they can easily be several GB in size.
I tried to use XDocument, but it just throws a System.OutOfMemoryException when I try to load the document.
What is the most performant way to read XML files of large size?
You basically have to use the "pull" model here - XmlReader and friends. That will allow you to stream the document rather than loading it all into memory in one go.
Note that if you know that you're at the start of a "small enough" element, you can create an XElement from an XmlReader, deal with that using the glory of LINQ to XML, and then move on to the next element.
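A rough sketch of that hybrid approach (the file name and the Record element name are placeholders, not from the question):

using System.Xml;
using System.Xml.Linq;

using (XmlReader reader = XmlReader.Create("huge.xml"))
{
    reader.MoveToContent();
    while (!reader.EOF)
    {
        if (reader.NodeType == XmlNodeType.Element && reader.Name == "Record")
        {
            // Materialize just this one element; ReadFrom advances the reader past it.
            XElement record = (XElement)XNode.ReadFrom(reader);
            // ... query 'record' with LINQ to XML here ...
        }
        else
        {
            reader.Read();
        }
    }
}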
The following page makes an interesting read, providing a means to mine data from an XML file without loading it into memory. It allows you to combine the speed of XmlReader with the flexibility of LINQ:
http://msdn.microsoft.com/en-us/library/bb387035.aspx
And quite an interesting article based on this technique:
http://blogs.msdn.com/b/xmlteam/archive/2007/03/24/streaming-with-linq-to-xml-part-2.aspx
You could try using an XmlTextReader instance.
http://msdn.microsoft.com/en-us/library/system.xml.xmltextreader.aspx
I've not done much with LINQ to XML, but all the examples I've seen load the entire XML document into memory.
What if the XML file is, say, 8GB, and you really don't have the option?
My first thought is to use the XElement.Load(TextReader) method overload in combination with an instance of the FileStream class.
QUESTION: will this work, and is this the right way to approach the problem of searching a very large XML file?
Note: high performance isn't required. I'm trying to get LINQ to XML to basically do the work of the program I could write that loops through every line of my big file and gathers up the data, but since LINQ is "loop centric" I'd expect this to be possible.
Using XElement.Load will load the whole file into memory. Instead, use XmlReader with the XNode.ReadFrom method, which lets you selectively load nodes found by XmlReader into XElement objects for further processing, if you need to. MSDN has a very good example doing just that: http://msdn.microsoft.com/en-us/library/system.xml.linq.xnode.readfrom.aspx
If you just need to search the XML document, XmlReader alone will suffice and will not load the whole document into memory.
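As a rough sketch of that search-only case (the element and attribute names here are made up for illustration):

using System.Xml;

static bool RecordExists(string path, string id)
{
    using (XmlReader reader = XmlReader.Create(path))
    {
        // Stream through the file; nothing is kept in memory beyond the current node.
        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Element &&
                reader.Name == "Record" &&
                reader.GetAttribute("id") == id)
            {
                return true;
            }
        }
    }
    return false;
}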
Gabriel,
Dude, this isn't exactly answering your ACTUAL question (how to read big XML docs using LINQ), but you might want to check out my old question What's the best way to parse big XML documents in C-Sharp. The last "answer" (timewise) was a "note to self" on what ACTUALLY WORKED. It turns out that a hybrid document-XmlReader & doclet-XmlSerializer approach is fast (enough) AND flexible.
BUT note that I was dealing with docs up to only 150MB. If you REALLY have to handle docs as big as 8GB, then I guess you're likely to encounter all sorts of problems, including issues with the O/S's LARGE_FILE (>2GB) handling... in which case I strongly suggest you keep things as-primitive-as-possible... and XmlReader is the most primitive (and, according to my testing, THE fastest) XML parser available in the Microsoft namespace.
Also: I've just noticed a belated comment in my old thread suggesting that I check out VTD-XML... I had a quick look at it just now... It "looks promising", even if the author seems to have contracted a terminal case of FIGJAM. He claims it'll handle docs of up to 256GB; to which I reply "Yeah, have you TESTED it? In WHAT environment?" It sounds like it should work, though... I've used this same technique to implement "hyperlinks" in a textual help system, back before HTML.
Anyway good luck with this, and your overall project. Cheers. Keith.
I realize that this answer might be considered non-responsive and possibly annoying, but I would say that if you have an XML file which is 8GB, then at least some of what you are trying to do in XML should be done by the file system or database.
If you have huge chunks of text in that file, you could store them as individual files and store the metadata and the filenames separately. If you don't, you must have many levels of structured data, probably with a lot of repetition of the structures. If you can decide what counts as an individual 'record' that can be stored as a smaller XML file or in a column of a database, then you can structure your database based on the levels of nesting above that. XML is great for small and dirty; it's also good for quite unstructured data, since it is self-structuring. But if you have 8GB of data that you are going to do something meaningful with, you must (usually) be able to count on some predictable structure somewhere in it.
Storing XML (or JSON) in a database, and querying and searching both for XML records, and within the XML is well supported nowadays both by SQL stuff and by the NoSQL paradigm.
Of course you might not have the choice of not using XML files this big, or you might have some situation where they are really the best solution. But for some people reading this it could be helpful to look at this alternative.
I am writing code that parses XML.
I would like to know what is faster to parse: elements or attributes.
This will have a direct effect over my XML design.
Please target the answers to C# and the differences between LINQ and XmlReader.
Thanks.
Design your XML schema so that the representation of the information actually makes sense. Usually, the decision between making something an attribute or an element will not affect performance.
Performance problems with XML are in most cases related to large amounts of data that are represented in a very verbose XML dialect. A typical countermeasure is to zip the XML data when storing it or transmitting it over the wire.
If that is not sufficient then switching to another format such as JSON, ASN.1 or a custom binary format might be the way to go.
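To illustrate the zip countermeasure mentioned above, a minimal sketch (the file names are placeholders) using GZipStream:

using System.IO;
using System.IO.Compression;

// Compress an existing XML file before storing or transmitting it.
using (FileStream source = File.OpenRead("data.xml"))
using (FileStream target = File.Create("data.xml.gz"))
using (GZipStream gzip = new GZipStream(target, CompressionMode.Compress))
{
    source.CopyTo(gzip);
}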
Addressing the second part of your question: The main difference between the XDocument (LINQ) and the XmlReader class is that the XDocument class builds a full document object model (DOM) in memory, which might be an expensive operation, whereas the XmlReader class gives you a tokenized stream on the input document.
With XML, speed is dependent on a lot of factors.
With regards to attributes or elements, pick the one that more closely matches the data. As a guideline, we use attributes for, well, attributes of an object; and elements for contained sub objects.
Depending on the amount of data you are talking about, using attributes can save you a bit on the size of your XML streams. For example, <person id="123" /> is smaller than <person><id>123</id></person>. This doesn't really impact the parsing, but it will impact the speed of sending the data across a network wire or loading it from disk... If we are talking about thousands of such records then it may make a difference to your application.
Of course, if that actually does make a difference then using JSON or some binary representation is probably a better way to go.
The first question you need to ask is whether XML is even required. If it doesn't need to be human readable then binary is probably better. Heck, a CSV or even a fixed-width file might be better.
With regards to LINQ vs XmlReader, this is going to boil down to what you do with the data as you are parsing it. Do you need to instantiate a bunch of objects and handle them that way or do you just need to read the stream as it comes in? You might even find that just doing basic string manipulation on the data might be the easiest/best way to go.
Point is, you will probably need to examine the strengths of each approach beyond just "what parses faster".
Without having any hard numbers to prove it, I know that the WCF team at Microsoft chose to make the DataContractSerializer their standard for WCF. It's limited in that it doesn't support XML attributes, but it is indeed up to 10-15% faster than the XmlSerializer.
From that information, I would assume that using XML attributes will be slower to parse than if you use only XML elements.
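For context, a minimal sketch of DataContractSerializer usage (the Order type and file name are illustrative only); note that everything ends up as elements, since attributes aren't supported:

using System.Runtime.Serialization;
using System.Xml;

[DataContract]
public class Order
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Customer { get; set; }
}

var serializer = new DataContractSerializer(typeof(Order));
using (XmlWriter writer = XmlWriter.Create("order.xml"))
{
    // All members are written as elements; there is no way to emit attributes here.
    serializer.WriteObject(writer, new Order { Id = 42, Customer = "Example" });
}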
I know this is a vague open ended question. I'm hoping to get some general direction.
I need to add cXML punchout to an ASP.NET C# site / application. This is replacing something that I wrote years ago in ColdFusion.
I'm a reasonably experienced C# developer but I haven't done much with XML. There seem to be lots of different options for processing XML in .NET.
Here's the open ended question: Assuming that I have an XML document in some form, e.g. a file or a string, what is the best way to read it into my code? I want to get the data and then query databases etc. The cXML document size and our traffic volumes are easily small enough that loading a cXML document into memory is not a problem.
Should I:
1) Manually build classes based on the DTD and use the XmlSerializer?
2) Use a tool to generate classes. There are sample cXML files downloadable from Ariba.com.
I tried xsd.exe to generate an xsd and then xsd.exe /c to generate classes. When I try to deserialize I get errors because there seems to be "confusion" around whether some elements should be single values or arrays.
I tried the CodeXS online tool but that gives errors in its log, and errors if I try to deserialize a sample document.
3) Create a dataset and ReadXml()?
4) Create a typed dataset and ReadXml()?
5) Use LINQ to XML. I often use LINQ to Objects so I'm familiar with LINQ in general, but I'm struggling to see what it gives me in this situation.
6) Some other means.
I guess I need to improve my understanding of XML in general, but even so... am I missing some obvious way of doing this? In the old ColdFusion site I found a free component ("tag") which basically ignored any schema and read the XML into a "structure", essentially a series of nested hash tables, which was then easy to read in code. That was probably quite sloppy but it worked.
I also need to generate XML files from my C# objects. Maybe LINQ to XML will be good for that. I could start with a default "template" document and manipulate it before saving.
Thanks for any pointers ...
If you need to generate arbitrary XML in an exact format, you should generate it manually using LINQ-to-XML.
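For instance, a rough sketch of projecting a collection of objects into XML with LINQ to XML (the anonymous item type and element names are assumptions, not actual cXML):

using System.Linq;
using System.Xml.Linq;

var items = new[]
{
    new { Sku = "A100", Quantity = 2 },
    new { Sku = "B200", Quantity = 5 }
};

// Project the objects straight into elements and attributes.
XDocument doc = new XDocument(
    new XElement("Order",
        from i in items
        select new XElement("Item",
            new XAttribute("sku", i.Sku),
            new XAttribute("quantity", i.Quantity))));

doc.Save("order.xml");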
Background
We have a project that was started in .NET 1.1, moved to .NET 2.0, and recently moved again to .NET 3.5. The project is extremely data-driven and utilizes XML for many of its data files. Some of these XML files are quite large and I would like to take the opportunity I currently have to improve the application's interaction with them. If possible, I want to avoid having to hold them entirely in memory at all times, but on the other hand, I want to make accessing their data fast.
The current setup uses XmlDocument and XPathDocument (depending on when it was written and by whom). The data is looked up when first requested and cached in an internal data structure (rather than as XML, which would take up more memory in most scenarios). In the past, this was a nice model as it had fast access times and low memory footprint (or at least, satisfactory memory footprint). Now, however, there is a feature that queries a large proportion of the information in one go, rather than the nicely spread out requests we previously had. This causes the XML loading, validation, and parsing to be a visible bottleneck in performance.
Question
Given a large XML file, what is the most efficient and responsive way to query its contents (such as, "does element A with id=B exist?") repeatedly without having the XML in memory?
Note that the data itself can be in memory, just not in its more bloated XML form if we can help it. In the worst case, we could accept a single file being loaded into memory to be parsed and then unloaded again to free resources, but I'd like to avoid that if at all possible.
Considering that we're already caching data where we can, this question could also be read as "which is faster and uses less memory: XmlDocument, XPathDocument, parsing based on XmlReader, or XDocument/LINQ to XML?"
Edit: Even simpler, can we randomly access the XML on disk without reading in the entire file at once?
Example
An XML file has some records:
<MyXml>
<Record id='1'/>
<Record id='2'/>
<Record id='3'/>
</MyXml>
Our user interface wants to know if a record exists with an id of 3. We want to find out without having to parse and load every record in the file, if we can. So, if it is in our cache, there's no XML interaction; if it isn't, we can just load that record into the cache and respond to the request.
Goal
To have a scalable, fast way of querying and caching XML data files so that our user interface is responsive without resorting to multiple threads or the long-term retention of entire XML files in memory.
I realize that there may well be a blog or MSDN article on this somewhere and I will be continuing to Google after I've posted this question, but if anyone has some data that might help, or some examples of when one approach is better or faster than another, that would be great.
Update
The XMLTeam published a blog today that gives great advice on when to use the various XML APIs in .NET. It looks like something based on XmlReader and IEnumerable would be my best option for the scenario I gave here.
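For example, a rough sketch of what that XmlReader/IEnumerable combination might look like for the sample above (the caching detail is just an illustration):

using System.Collections.Generic;
using System.Xml;

static IEnumerable<string> StreamRecordIds(string path)
{
    using (XmlReader reader = XmlReader.Create(path))
    {
        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Element && reader.Name == "Record")
            {
                // Only the attribute value is kept; the XML itself is never held in memory.
                yield return reader.GetAttribute("id");
            }
        }
    }
}

// Fill the cache once, then answer lookups like "does id 3 exist?" from it.
var knownIds = new HashSet<string>(StreamRecordIds("MyXml.xml"));
bool recordThreeExists = knownIds.Contains("3");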
With XML I only know of two ways:
XmlReader -> stream the large XML data in, or
use the XML DOM object model and read the entire XML into memory at once.
If the XML is big (we have XML files in the 80 MB range and up), reading the XML into memory is a performance hit. There is no real way to "merge" the two ways of dealing with XML documents. Sorry.
I ran across this white paper a while ago when I was trying to stream XML: "API-based XML streaming with FLWOR power and functional updates". The paper tries to work with in-memory XML but leverage LINQ-style access.
Maybe someone will find it interesting.
This might sound stupid.
But, if you have simple things to query, you can use regex over XML files (the way grep is used on Unix/Linux).
I apologize if it doesn't make any sense.
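For example, a quick-and-dirty sketch of that grep-style approach (the pattern is illustrative and assumes the attribute appears on a single line):

using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

// Scan the file line by line without ever loading it all into memory.
bool found = File.ReadLines("huge.xml")
    .Any(line => Regex.IsMatch(line, @"<Record\s+id='3'\s*/>"));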
The first part of your question sounds like a schema validation would work best. If you have access to the XSDs or can create them, you could use an algorithm similar to this:
// Requires: using System.IO; using System.Xml; using System.Xml.Schema;

public void ValidateXmlToXsd(string xsdFilePath, string xmlFilePath)
{
    XmlSchema schema = ValidateXsd(xsdFilePath);
    XmlDocument xmlData = new XmlDocument();

    // Configure a validating reader using the compiled schema.
    XmlReaderSettings validationSettings = new XmlReaderSettings();
    validationSettings.Schemas.Add(schema);
    validationSettings.Schemas.Compile();
    validationSettings.ValidationFlags = XmlSchemaValidationFlags.ProcessInlineSchema;
    validationSettings.ValidationType = ValidationType.Schema;
    validationSettings.ValidationEventHandler += new ValidationEventHandler(ValidationHandler);

    // Loading the document through the validating reader validates the whole file.
    XmlReader xmlFile = XmlReader.Create(xmlFilePath, validationSettings);
    xmlData.Load(xmlFile);
    xmlFile.Close();
}

private XmlSchema ValidateXsd(string xsdFilePath)
{
    // Read and compile the schema; any schema error is routed to ValidationHandler.
    StreamReader schemaFile = new StreamReader(xsdFilePath);
    XmlSchema schema = XmlSchema.Read(schemaFile, new ValidationEventHandler(ValidationHandler));
    schema.Compile(new ValidationEventHandler(ValidationHandler));
    schemaFile.Close();
    schemaFile.Dispose();
    return schema;
}

private void ValidationHandler(object sender, ValidationEventArgs e)
{
    // Surface any validation problem as an exception.
    throw new XmlSchemaException(e.Message);
}
If the XML fails to validate, an XmlSchemaException is thrown.
As for LINQ, I personally prefer to use XDocument whenever I can over XmlDocument. Your goal is somewhat subjective and without seeing exactly what you're doing I can't say go this way or go that way with any certainty that it would help you. You can use XPath with XDocument. I would have to say that you should use whichever suits your needs best. There's no issue with using XPath sometimes and LINQ other times. It really depends on your comfort level along with scalability and readability. What will benefit the team, so to speak.
An XmlReader will use less memory than an XmlDocument because it doesn't need to load the entire XML into memory at one time.
Just a thought on the comments of JMarsch. Even if the XML generation process is not up for discussion, have you considered a DB (or a subset of XML files acting as indexes) as an intermediary? This would obviously only be of benefit if the XML files aren't updated more than once or twice a day. I guess this would need to be weighed up against your existing caching mechanism.
I can't speak to speed, but I prefer XDocument/LINQ because of the syntax.
Rich