I have a set of very large XML files with an XSD. A single XML file might be up to 300 MB.
I need to move data from XML into SQL Server.
I found that Microsoft has a serialization library that maps XML into objects:
http://msdn.microsoft.com/en-us/library/182eeyhh.aspx
The problem I am worried about is: when it maps the XML into objects, will it load all the data into memory? If it does, it seems I cannot use it.
So is XmlTextReader the best approach for my case, i.e. reading the file record by record and storing the data into the database?
Yes, in .NET, XML serialization reads everything into memory at one time.
A more memory-efficient approach is to use a System.Xml.XmlReader to stream the content node by node.
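For instance, a minimal sketch of that streaming approach (the <Record>/<Id>/<Name> element names, the file path, the connection string and the dbo.Records table are assumptions - adjust them to your actual schema):

using System;
using System.Data.SqlClient;
using System.Xml;
using System.Xml.Linq;

class Importer
{
    static void Main()
    {
        using (var connection = new SqlConnection("Server=.;Database=Target;Integrated Security=true"))
        using (var reader = XmlReader.Create(@"C:\data\huge.xml"))
        {
            connection.Open();

            // Move from record to record without ever materializing the whole document.
            while (reader.ReadToFollowing("Record"))
            {
                // ReadSubtree gives a reader scoped to the current record only.
                using (var recordReader = reader.ReadSubtree())
                {
                    var record = XElement.Load(recordReader);

                    using (var cmd = new SqlCommand(
                        "INSERT INTO dbo.Records (Id, Name) VALUES (@id, @name)", connection))
                    {
                        cmd.Parameters.AddWithValue("@id", (int)record.Element("Id"));
                        cmd.Parameters.AddWithValue("@name", (string)record.Element("Name"));
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }
}

Only one record is in memory at any time, so this scales to 300 MB files; for better insert throughput you could batch the rows or use SqlBulkCopy instead of per-row commands.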
Related
I have a large XML file which contains a database!
It is 400 MB in size.
It was created using LINQ itself, and it was done in 10 minutes! Great result!
But in order to read a small piece of information from that XML file using LINQ, it needs 20 minutes and more!
Just imagine: reading a small amount of information takes more time than writing a large amount!
During the read process it needs to call XDocument.Load(@"C:\400mb.xml"), which is not IDisposable.
So it loads the whole XML document, and when it has fetched my small piece of information, the memory does not clear!
My goal is to read:
XDocument XD1 = XDocument.Load(@"C:\400mb.xml");
string s1 = XD1.Root.Attribute("AnyAttribute").Value;
As you can see, I need to get an attribute of the root element.
This means that in the XML file the data I need might be on the first line, and the query should finish very quickly!
But instead it loads the whole document and only then returns that information!
So the question is: how can I read that small amount of information from a large XML file, using anything at all?
Would the System.Threading.Tasks namespace be useful? Or creating asynchronous operations?
Or is there even some technique that will work on that XML file as if it were a binary file?
I don't know! Please help!
XDocument.Load is not the best approach; the reason is that XDocument.Load loads the whole file into memory. According to MSDN, memory usage will be proportional to the size of the file. You can use XmlReader instead (check here) if you are just planning to search the XML document. Read the documentation on MSDN.
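For instance, a short sketch using XmlReader to read only the root element's attribute; the "AnyAttribute" name and the file path come from the question above:

using System;
using System.Xml;

class RootAttributeReader
{
    static void Main()
    {
        using (var reader = XmlReader.Create(@"C:\400mb.xml"))
        {
            // MoveToContent positions the reader on the root element
            // without parsing the rest of the 400 MB document.
            reader.MoveToContent();
            string s1 = reader.GetAttribute("AnyAttribute");
            Console.WriteLine(s1);
        }
    }
}

Because only the start of the file is parsed, this returns almost instantly regardless of the file size.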
I've been tasked with the job of importing a set of XML files, transforming them, uploading them to an SQL database, and then re-transforming them into a different XML format.
The XML files are rather large, and some of them are a little complex, so I'm unsure of the best way to do this. I'd of course like to automate this process somehow - and was actually hoping there'd be some kind of Entity Framework-esque solution to this.
I'm quite new to handling and dealing with XML in .NET, so I don't really know what my options are. I've read about XSLT, but that seems to me to be a "language" I'd need to learn first, making it kind of not a solution for me.
Just to set a bit of context, the final solution actually needs to import new/updated versions of the XML on a weekly basis, uploading the new data to SQL and re-exporting it in the other XML format.
If anyone could give me any ideas as to how to proceed, I'd be much obliged.
My first instinct was to use something like XSD2DB or XML SPY to first create the database structure, but I don't really see how I'm supposed to proceed from there either.
I'm quite blank in fact :)
XSLT is a language used by XML processors to transform an XML document in one format into an XML document in another format. XSLT would be your choice if you didn't also need to store the data in a database.
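If you do go the XSLT route, applying a stylesheet from .NET code is only a few lines; a minimal sketch (input.xml, transform.xslt and output.xml are placeholder file names):

using System.Xml.Xsl;

class TransformRunner
{
    static void Main()
    {
        var xslt = new XslCompiledTransform();
        xslt.Load("transform.xslt");                // compile the stylesheet
        xslt.Transform("input.xml", "output.xml");  // apply it to the source document
    }
}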
Tools like XSD2DB or XML SPY will create some database schema for you, but the quality of the schema will depend heavily on the quality of the XML document and the XSD (do you have an XSD, or are you going to generate it from a sample XML?). The generated database will probably not be very useful for EF.
If you have an XSD, you can use the xsd.exe tool shipped with Visual Studio to generate classes representing the data of your XML files in .NET code. You will then be able to use XmlSerializer to deserialize the XML documents into your generated classes. One problem is that some XSD constructs, like choice, are modeled in .NET code in a very ugly way. Another problem can be performance if your XML files are really huge, because deserialization must read all the data at once. The last problem is again EF: classes generated from the XSD will most probably not be usable as entities, and you will not be able to map them.
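A rough sketch of that route, assuming you first run something like xsd.exe YourSchema.xsd /classes from a Visual Studio command prompt; the Catalog class below is only a stand-in for whatever root class xsd.exe actually generates from your schema:

using System.IO;
using System.Xml.Serialization;

// Stand-in for the class xsd.exe would generate from your schema.
[XmlRoot("Catalog")]
public class Catalog
{
    [XmlElement("Item")]
    public string[] Items { get; set; }
}

public static class CatalogLoader
{
    public static Catalog Load(string path)
    {
        var serializer = new XmlSerializer(typeof(Catalog));
        using (var stream = File.OpenRead(path))
        {
            // Note: this reads the whole document into memory at once,
            // which is the performance caveat mentioned above.
            return (Catalog)serializer.Deserialize(stream);
        }
    }
}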
So either use EF, in which case you will have to analyze the XSD, design your own database, create custom entities and mappings, and fill your classes from an XmlReader (best performance), XmlDocument or XDocument; or use some tool that helps you create classes or a database from the XML, in which case work with the database through direct SQL.
The reverse operation will again require a custom approach. You will have data represented either by your custom EF entities or by some autogenerated classes, and you will have to transform them into the new format. You can again use xsd.exe to get classes for the new format and write custom .NET code filling the new classes from the old ones (then use XmlSerializer to persist the new structure to XML), or you can use XmlWriter, XDocument or XmlDocument to build the target XML document directly.
Data migration in any form is not an easy task with a ready-to-use solution. For really huge data processing you can use tools like SQL Server Integration Services, where you will interact with XML and SQL directly and process data in batches.
Have a look at SQLXML 4.0. It does exactly what you want (for the upload part).
I am trying to do a merge sort on sorted chunks of XML files on disk. There is no chance that they all fit in memory. My XML files consist of records.
Say I have n XML files. If I had enough memory, I would read the entire contents of each file into a corresponding queue, one queue per file, compare the timestamps on the items at the front of each queue, and output the one with the smallest timestamp to another file (the merge file). This way, I merge all the little files into one big file with all the entries time-sorted.
The problem is that I don't have enough memory to read all the XML with .ReadToEnd and then pass it to XDocument.Parse.
Is there a clean way to read just enough records to keep each of the queues filled for the next pass that compares their XElement attribute "TimeStamp", while remembering how far into each file it has read?
Thank you.
An XmlReader is what you are looking for.
"Represents a reader that provides fast, non-cached, forward-only access to XML data."
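For instance, a sketch of streaming one record at a time from each sorted chunk so the merge never holds more than one record per file in memory; it assumes each chunk looks like <Records><Record TimeStamp="...">...</Record>...</Records>, so adjust the element and attribute names to your data:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml;
using System.Xml.Linq;

class ChunkMerger
{
    // Lazily yields records from one chunk file without loading it entirely.
    static IEnumerable<XElement> StreamRecords(string path)
    {
        using (var reader = XmlReader.Create(path))
        {
            reader.MoveToContent();          // positions the reader on the root element
            while (!reader.EOF)
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "Record")
                    yield return (XElement)XNode.ReadFrom(reader);   // consume exactly one record
                else
                    reader.Read();
            }
        }
    }

    static void Main(string[] args)
    {
        // One enumerator per chunk plays the role of the per-file queue:
        // it remembers how far into the file we are and hands out the next record on demand.
        var cursors = args.Select(path => StreamRecords(path).GetEnumerator())
                          .Where(cursor => cursor.MoveNext())
                          .ToList();

        using (var writer = XmlWriter.Create("merged.xml", new XmlWriterSettings { Indent = true }))
        {
            writer.WriteStartElement("Records");
            while (cursors.Count > 0)
            {
                // Pick the cursor whose current record has the smallest TimeStamp.
                var smallest = cursors
                    .OrderBy(cursor => (DateTime)cursor.Current.Attribute("TimeStamp"))
                    .First();

                smallest.Current.WriteTo(writer);

                if (!smallest.MoveNext())
                {
                    smallest.Dispose();
                    cursors.Remove(smallest);   // that chunk is exhausted
                }
            }
            writer.WriteEndElement();
        }
    }
}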
So it has fallen out of fashion, but this is exactly the problem solved by SAX. It is the Simple API for XML, and it is based on callbacks. You launch a read operation, and your code gets called back for each record. This may be an option, as it does not require the program to load the entire XML file (as XmlDocument does). Google SAX.
If you like the LINQ to XML API, this CodePlex project may suit your needs.
I want to use the powerful DataContractSerializer to write data to or read data from an XML file.
But as far as I understand it, DataContractSerializer can only read or write an entire structure or list of structures at a time.
My use case is described below... I cannot figure out how to optimize the performance when using this API.
I have a structure named "Information" and a List<Information> with an unpredictable number of elements in it.
The user may update or add new elements to this list very often.
Per operation (add or update), I must serialize all the elements in the list to the same XML file.
So I will write the same data into the XML again even when it has not been modified. It does not make sense, but I cannot find any approach to avoid this happening.
Due to the tombstoning mechanism, I must save all the information within 10 seconds.
I'm worried about the performance and about possibly making the UI lag...
Is there any workaround to partially update or add information to the XML file with DataContractSerializer?
DataContractSerializer can be used to serialize selected items - what you need to do is come up with a scheme to identify changed data and a way to serialize it efficiently. For example, one way could be:
You start by serializing the entire list of structures to a file.
Whenever some object is added/updated/removed from the list, you create a diff object that identifies the kind of change and the object changed. Then you serialize this object to XML and append the XML to the file.
While reading the file, you have to apply similar logic: first read the list, and then apply the diffs one after another.
Because you want to continuously append to the file, you shouldn't have a root element in it. In other words, the file with the diff info will not be a valid XML document; it will contain a series of XML fragments. To read it, you have to enclose these fragments in an XML declaration and root element.
You may use a background task to periodically write out the entire list and generate a valid XML file. At that point you can discard your diff file. The idea is to mimic a transactional system: one data structure holding the serialized/saved info, and another structure containing the changes (akin to a transaction log).
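A rough sketch of the append step along those lines; ChangeKind, InformationDiff and the simplified Information type are made-up names for illustration, not an existing API:

using System.IO;
using System.Runtime.Serialization;
using System.Xml;

// Simplified stand-in for the asker's Information structure.
[DataContract]
public class Information
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Text { get; set; }
}

public enum ChangeKind { Added, Updated, Removed }

// One entry of the "transaction log": what changed and how.
[DataContract]
public class InformationDiff
{
    [DataMember] public ChangeKind Kind { get; set; }
    [DataMember] public Information Item { get; set; }
}

public static class DiffLog
{
    static readonly DataContractSerializer Serializer =
        new DataContractSerializer(typeof(InformationDiff));

    // Serialize only the changed item and append it to the log as an XML fragment.
    public static void Append(string path, InformationDiff diff)
    {
        var settings = new XmlWriterSettings
        {
            OmitXmlDeclaration = true,
            ConformanceLevel = ConformanceLevel.Fragment
        };
        using (var stream = new FileStream(path, FileMode.Append))
        using (var writer = XmlWriter.Create(stream, settings))
        {
            Serializer.WriteObject(writer, diff);
        }
    }
}

To read the log back, you would wrap the concatenated fragments in a synthetic root element, as described above, and deserialize them one by one.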
If performance is a concern, then consider using something other than DataContractSerializer.
There is a good comparison of the options at
http://blogs.claritycon.com/kevinmarshall/2010/11/03/wp7-serialization-comparison/
If the size of the list is a concern, you could try breaking it into smaller lists. The most appropriate way to do this will depend on the data in your list and on typical usage/edit/addition patterns.
Depending on the frequency with which the data is changed, you could try saving it whenever it changes. This would remove the need to save it all within the time available for deactivation.
Just a quick question.
Other than the way you manipulate them, are XmlDocuments and DataSets basically the same thing? I'm just wondering because of speed issues.
I have come across some code that calls dataSet.GetXml() and then traverses the resulting XmlDocument.
I'm just curious what the performance difference is and which one is better to use!
Thanks,
Adam
Very different.
A DataSet is a collection of related tabular records (with a strong focus on databases), including change tracking.
An XmlDocument is a tree structure of arbitrary data. You can convert between the two.
For "which is best".... what are you trying to do? Personally I very rarely (if ever) use DataSet / DataTable, but some people like them. I prefer an object (class) representation (perhaps via deserialization), but xml processing is fine in many cases.
It does, however, seem odd to write a DataSet to xml and then query the xml. In that scenario I would just access the original data directly.
No, they are not. A DataSet does not store its internal data as XML, and an XmlDocument does not use a table/row structure to store XML elements. You can convert from one to the other, within severe limits, but that's it. One of the biggest limitations is that a DataSet requires data to fit a strict table/column format, whereas an XmlDocument can have a wildly different structure from one XmlElement to the next. Moreover, the hierarchical structure of an XmlDocument usually doesn't map well to the tabular structure of a DataSet.
.NET provides XmlDataDocument as a way to handle XML data in a tabular way. You have to remember, though, that an XmlDataDocument is an XmlDocument first. The generated DataSet is just an alternative and limited way to look at the underlying XML data.
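For illustration, a brief sketch of the two views XmlDataDocument gives over the same data; orders.xsd and orders.xml are placeholder files, and the Order table with an Id column is an assumption about what the schema defines:

using System;
using System.Data;
using System.Xml;

class XmlDataDocumentDemo
{
    static void Main()
    {
        // The relational shape comes from the schema.
        var dataSet = new DataSet();
        dataSet.ReadXmlSchema("orders.xsd");

        // Loading the document also fills the rows of doc.DataSet.
        var doc = new XmlDataDocument(dataSet);
        doc.Load("orders.xml");

        // Tabular view of the data...
        foreach (DataRow row in doc.DataSet.Tables["Order"].Rows)
            Console.WriteLine(row["Id"]);

        // ...and the XML view of the same underlying data.
        XmlNode firstOrder = doc.SelectSingleNode("//Order");
        Console.WriteLine(firstOrder.OuterXml);
    }
}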
Depending on the size of your tables, LINQ to XML or XQuery might be faster for querying your data than looking through the table. I'm not positive on this; it is something you will have to test against your own data.
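To make the comparison concrete, here is a sketch of the two query styles you would benchmark against each other; the Customers table and Name column are made up:

using System;
using System.Data;
using System.Linq;
using System.Xml.Linq;

class QueryComparison
{
    static void Main()
    {
        DataSet dataSet = BuildSampleData();

        // 1. Query the DataSet directly.
        DataRow[] viaTable = dataSet.Tables["Customers"].Select("Name = 'Adam'");

        // 2. Convert to XML once, then query with LINQ to XML.
        XDocument doc = XDocument.Parse(dataSet.GetXml());
        var viaXml = doc.Descendants("Customers")
                        .Where(c => (string)c.Element("Name") == "Adam");

        Console.WriteLine("{0} row(s) vs {1} element(s)", viaTable.Length, viaXml.Count());
    }

    // Small in-memory sample so the sketch actually runs.
    static DataSet BuildSampleData()
    {
        var table = new DataTable("Customers");
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add("Adam");
        table.Rows.Add("Eve");
        var dataSet = new DataSet("Data");
        dataSet.Tables.Add(table);
        return dataSet;
    }
}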