Just a quick question.
Other than the way you manipulate them, are XmlDocuments and DataSets basically the same thing? I'm mainly wondering about speed.
I have come across some code that calls dataSet.GetXml() and then traverses the resulting XmlDocument.
I'm curious what the performance difference is and which one is best to use!
Thanks,
Adam
Very different.
A DataSet is a collection of related tabular records (with a strong focus on databases), including change tracking.
An XmlDocument is a tree structure of arbitrary data. You can convert between the two.
For "which is best".... what are you trying to do? Personally I very rarely (if ever) use DataSet / DataTable, but some people like them. I prefer an object (class) representation (perhaps via deserialization), but xml processing is fine in many cases.
It does, however, seem odd to write a DataSet to xml and then query the xml. In that scenario I would just access the original data directly.
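For instance, a direct query against the DataSet (the "Orders" table here is made up, purely for illustration) avoids the GetXml() round trip entirely:

    using System;
    using System.Data;

    class Program
    {
        static void Main()
        {
            // Hypothetical DataSet with a single "Orders" table.
            var ds = new DataSet();
            var orders = ds.Tables.Add("Orders");
            orders.Columns.Add("Id", typeof(int));
            orders.Columns.Add("Total", typeof(decimal));
            orders.Rows.Add(1, 42.50m);
            orders.Rows.Add(2, 99.99m);

            // Query the DataTable directly instead of serializing to XML first.
            foreach (DataRow row in orders.Select("Total > 50"))
            {
                Console.WriteLine("Order {0}: {1}", row["Id"], row["Total"]);
            }
        }
    }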
No, they are not. A DataSet does not store its internal data as XML, and an XmlDocument does not use a table/row structure to store XML elements. You can convert from one to the other, within severe limits, but that's it. One of the biggest limitations is that a DataSet requires data to fit a strict table/column format, whereas an XmlDocument can have a wildly different structure from one XmlElement to the next. Moreover, the hierarchical structure of an XmlDocument usually doesn't map well to the tabular structure of a DataSet.
.NET provides XmlDataDocument as a way to handle XML data in a tabular way. You have to remember, though, that an XmlDataDocument is an XmlDocument first. The generated DataSet is just an alternative, limited way to look at the underlying XML data.
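A rough sketch of that dual view (the file names are placeholders; the schema tells XmlDataDocument which elements map to tables):

    using System;
    using System.Data;
    using System.Xml;

    class Program
    {
        static void Main()
        {
            // Load the relational schema first so the mapping is known.
            var ds = new DataSet();
            ds.ReadXmlSchema("schema.xsd");

            var doc = new XmlDataDocument(ds);
            doc.Load("data.xml");

            // The same data is now visible both as XML nodes and as DataSet rows.
            foreach (DataTable table in ds.Tables)
            {
                Console.WriteLine("{0}: {1} rows", table.TableName, table.Rows.Count);
            }
        }
    }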
Depending on the size of your tables, LINQ to XML or XQuery might be faster for querying your data than looking through the table. I'm not positive about this; it is something you will have to test against your own data.
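If you want to try the LINQ to XML route, a query might look roughly like this (the document shape is made up); measure it against DataTable.Select on your own data:

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class Program
    {
        static void Main()
        {
            // Hypothetical document; the point is only to show the query style.
            var doc = XDocument.Parse(
                "<orders><order id='1' total='42.50'/><order id='2' total='99.99'/></orders>");

            var bigOrders =
                from o in doc.Descendants("order")
                where (decimal)o.Attribute("total") > 50m
                select (string)o.Attribute("id");

            Console.WriteLine(string.Join(", ", bigOrders));
        }
    }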
What is the best way to store big objects? In my case it's something like a tree or a linked list.
I tried the following:
1) Relational db
Is not good for tree structures.
2) Document db
I tried RavenDB but it raised a System.OutOfMemoryException when I call the SaveChanges method.
3) .Net Serialization
It works very slowly.
4) Protobuf
It can't deserialize List<List<>> types, and I'm not sure about linked structures.
So...?
You mention protobuf - I routinely use protobuf-net with objects that are many hundreds of megabytes in size, but: it does need to be suitably written as a DTO, and ideally as a tree (not a bidirectional graph, although that usage is supported in some scenarios).
In the case of a doubly-linked list, that might mean simply: marking the "previous" links as not serialized, then doing a fix-up in an after-deserialize callback, to correctly set the "previous" links. Pretty easy normally.
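For illustration, that fix-up might look something like this with protobuf-net (the Node type is made up; attribute names are the protobuf-net v2 ones):

    using ProtoBuf;

    [ProtoContract]
    public class Node
    {
        [ProtoMember(1)]
        public string Value { get; set; }

        // Only the forward links are serialized, keeping the data a simple chain.
        [ProtoMember(2)]
        public Node Next { get; set; }

        // No [ProtoMember], so protobuf-net skips it on the wire.
        public Node Previous { get; set; }

        // Runs after deserialization and rebuilds the back-links; inner nodes
        // repeat the work redundantly, which is harmless in this sketch.
        [ProtoAfterDeserialization]
        public void FixUpPreviousLinks()
        {
            for (Node current = this; current.Next != null; current = current.Next)
            {
                current.Next.Previous = current;
            }
        }
    }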
You are correct that it doesn't currently support nested lists. This is usually trivial to side-step by using a list of something that has a list, but I'm tempted to make this implicit - i.e. the library should be able to simulate this without you needing to change your model. If you are interested in me doing this, let me know.
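The workaround mentioned above looks roughly like this (type names are made up): instead of List<List<int>>, wrap the inner list in its own contract type.

    using System.Collections.Generic;
    using ProtoBuf;

    [ProtoContract]
    public class Row
    {
        [ProtoMember(1)]
        public List<int> Values { get; set; }
    }

    [ProtoContract]
    public class Matrix
    {
        // A list of wrappers serializes fine, where List<List<int>> would not.
        [ProtoMember(1)]
        public List<Row> Rows { get; set; }
    }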
If you have a concrete example of a model you'd like to serialize, and want me to offer guidance, let me know - if you can't post it here, then my email is in my profile. Entirely up to you.
Did you try Json.NET and storing the result in a file?
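If you try that, streaming the serializer straight to disk keeps the whole JSON string out of memory; a minimal sketch (the type and path are placeholders; for doubly-linked structures you would also need to enable PreserveReferencesHandling, otherwise Json.NET will complain about a self-referencing loop):

    using System.IO;
    using Newtonsoft.Json;

    public static class JsonFileStore
    {
        // Writes the object graph directly to the file, without building one huge string.
        public static void Save(object graph, string path)
        {
            var serializer = new JsonSerializer();
            using (var sw = new StreamWriter(path))
            using (var writer = new JsonTextWriter(sw))
            {
                serializer.Serialize(writer, graph);
            }
        }

        public static T Load<T>(string path)
        {
            var serializer = new JsonSerializer();
            using (var sr = new StreamReader(path))
            using (var reader = new JsonTextReader(sr))
            {
                return serializer.Deserialize<T>(reader);
            }
        }
    }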
Option 2: NoSQL (Document) Database
I suggest Cassandra.
From the Cassandra wiki:
Cassandra's public API is based on Thrift, which offers no streaming abilities -- any value written or fetched has to fit in memory. This is inherent to Thrift's design and is therefore unlikely to change. So adding large object support to Cassandra would need a special API that manually split the large objects up into pieces. A potential approach is described in http://issues.apache.org/jira/browse/CASSANDRA-265. As a workaround in the meantime, you can manually split files into chunks of whatever size you are comfortable with -- at least one person is using 64MB -- and making a file correspond to a row, with the chunks as column values.
So if your files are < 10MB you should be fine, just make sure to limit the file size, or break large files up into chunks.
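If you do go the chunking route, the client-side splitting is simple enough; a rough sketch (chunk naming and the 64MB size are arbitrary, and mapping each chunk to a column value is left to you):

    using System.IO;

    class Chunker
    {
        const int ChunkSize = 64 * 1024 * 1024; // 64 MB, as mentioned above

        // Splits a large file into fixed-size pieces on disk.
        static void Split(string path)
        {
            var buffer = new byte[ChunkSize];
            using (var input = File.OpenRead(path))
            {
                int index = 0;
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    var chunkPath = path + "." + index++;
                    using (var output = File.Create(chunkPath))
                    {
                        output.Write(buffer, 0, read);
                    }
                }
            }
        }
    }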
CouchDb does a very good job with challenges like that one.
storing a tree in CouchDb
storing a tree in relational databases
I've been tasked with importing a set of XML files, transforming them and uploading them to an SQL database, and then re-transforming them to a different XML format.
The XML files are rather large, and some of them a little complex, so I'm unsure of the best way to do this. I'd of course like to automate this process somehow - and was actually hoping there'd be some kind of Entity Framework-esque solution to this.
I'm quite new to handling and dealing with XML in .NET, so I don't really know what my options are. I've read about XSLT, but that seems to me to be a "language" I'd need to learn first, which makes it not really a solution for me.
Just to set a bit of context, the final solution actually needs to import new/updated versions of the XML on a weekly basis, uploading the new data to sql, and re-exporting as the other XML-format.
If anyone could give me any ideas as to how to proceed, I'd be much obliged.
My first instinct was to use something like XSD2DB or XML SPY to first create the database structure, but I don't really see how I'm supposed to proceed from there either.
I'm quite blank in fact :)
XSLT is a language used by XML processors to transform an XML document in one format into an XML document in another format. XSLT would be your choice if you didn't also need to store the data in a database.
Tools like XSD2DB or XML SPY will create some database schema for you, but the quality of the schema will be very dependent on the quality of the XML document and XSD (do you have an XSD, or are you going to generate it from a sample XML?). The generated database will probably not be very useful for EF.
If you have an XSD you can use the xsd.exe tool shipped with Visual Studio and generate classes representing the data of your XML files in .NET code. You will be able to use XmlSerializer to deserialize the XML documents into your generated classes. The problem is that some XSD constructs, like choice, are modeled in .NET code in a very ugly way. Another problem can be performance if your XML files are really huge, because deserialization must read all the data at once. The last problem can again be EF - classes generated by xsd.exe will most probably not be usable as entities and you will not be able to map them.
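As a concrete illustration of the xsd.exe route (file, element and class names here are placeholders; in practice the types are generated for you, a hand-written stand-in is shown so the sketch compiles):

    // Generated in practice by: xsd.exe orders.xsd /classes
    using System.Collections.Generic;
    using System.IO;
    using System.Xml.Serialization;

    [XmlRoot("orders")]
    public class Orders
    {
        [XmlElement("order")]
        public List<Order> Items { get; set; }
    }

    public class Order
    {
        [XmlAttribute("id")]
        public int Id { get; set; }
    }

    public static class Importer
    {
        public static Orders Load(string path)
        {
            var serializer = new XmlSerializer(typeof(Orders));
            using (var stream = File.OpenRead(path))
            {
                // Note: XmlSerializer materialises the whole document in memory.
                return (Orders)serializer.Deserialize(stream);
            }
        }
    }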
So either use EF, in which case you will have to analyze the XSD, create custom entities and a mapping to your own designed database, and fill your classes from XmlReader (best performance), XmlDocument or XDocument; or use some tool that helps you create classes or a database from the XML, in which case use direct SQL to work with the database.
The reverse operation will again require a custom approach. You will have data represented either by your custom EF entities or by some autogenerated classes, and you will have to transform them to the new format. You can again use xsd.exe to get classes for the new format and write custom .NET code filling the new classes from the old ones (and use XmlSerializer to persist the new structure to XML), or you can use XmlWriter, XDocument or XmlDocument to build the target XML document directly.
Data migration in any form is not an easy task with a ready-to-use solution. In the case of really huge data processing you can use tools like SQL Server Integration Services, where you will interact with XML and SQL directly and process the data in batches.
Have a look at SQLXML 4.0. It does exactly what you want (for the upload part).
I have a set of very large XML files with an XSD. One XML file might be up to 300MB.
I need to move data from XML into SQL Server.
I found that Microsoft has a serialization library to map XML into objects:
http://msdn.microsoft.com/en-us/library/182eeyhh.aspx
The problem I am worried about is: when it maps the XML into objects, will it load all the data into memory? If it does, it seems I cannot use it.
So is XmlTextReader the best way for my case, i.e. reading the file piece by piece and storing the data into the database?
Yes, in .NET, XML serialization reads everything into memory at one time.
A more memory-efficient approach is to use a System.Xml.XmlReader to stream the content, reading it node by node.
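A minimal sketch of that approach, assuming the document is basically a flat list of <item> elements (element, column, table and connection-string details are made up), combined with SqlBulkCopy so only a slice of the 300MB file is ever in memory:

    using System.Data;
    using System.Data.SqlClient;
    using System.Xml;

    class Streamer
    {
        static void Main()
        {
            // Buffer rows into a DataTable and push them to the server in batches.
            var batch = new DataTable();
            batch.Columns.Add("Id", typeof(int));
            batch.Columns.Add("Name", typeof(string));

            using (var reader = XmlReader.Create("huge.xml"))
            using (var bulk = new SqlBulkCopy(
                "Data Source=.;Initial Catalog=Target;Integrated Security=true"))
            {
                bulk.DestinationTableName = "dbo.Items";

                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element && reader.Name == "item")
                    {
                        batch.Rows.Add(
                            int.Parse(reader.GetAttribute("id")),
                            reader.GetAttribute("name"));

                        if (batch.Rows.Count == 10000)
                        {
                            bulk.WriteToServer(batch);
                            batch.Clear();
                        }
                    }
                }

                if (batch.Rows.Count > 0)
                {
                    bulk.WriteToServer(batch);
                }
            }
        }
    }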
I'm supposed to do the following:
1) read a huge (700MB ~ 10 million elements) XML file;
2) parse it preserving order;
3) create a text(one or more) file with SQL insert statements to bulk load it on the DB;
4) take the relational tuples and write them back out as XML.
I'm here to exchange some ideas about the best (== fast fast fast...) way to do this. I will use C# 4.0 and SQL Server 2008.
I believe that XmlTextReader is a good start. But I do not know if it can handle such a huge file. Does it load the whole file when it is instantiated, or does it hold only the current node in memory? I suppose I can do a while(reader.Read()) loop and that should be fine.
What is the best way to write the text files? As I have to preserve the ordering of the XML (adopting some numbering scheme) I will have to hold some parts of the tree in memory to do the calculations, etc. Should I iterate with a StringBuilder?
I will have two scenarios: one where every node (element, attribute or text) will be in the same table (i.e., will be the same object), and another where for each type of node (just these three types, no comments etc.) I will have a table in the DB and a class to represent this entity.
My last specific question is: how good is DataSet's ds.WriteXml? Will it handle 10M tuples? Maybe it's best to bring chunks from the database and use an XmlWriter... I really don't know.
I'm testing all this stuff... But I decided to post this question to hear from you guys, hoping your expertise can help me do these things more correctly and faster.
Thanks in advance,
Pedro Dusso
I'd use the SQLXML Bulk Load Component for this. You provide a specially annotated XSD schema for your XML with embedded mappings to your relational model. It can then bulk load the XML data blazingly fast.
If your XML has no schema you can create one from Visual Studio by loading the file and selecting Create Schema from the XML menu. You will, however, need to add the mappings to your relational model yourself. This blog has some posts on how to do that.
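Once the annotated schema is in place, the loading code itself is tiny; roughly like this via the SQLXML 4.0 COM interop assembly (class, property and file names follow the SQLXML samples, so double-check them against your installed version):

    using SQLXMLBULKLOADLib;

    class Loader
    {
        static void Main()
        {
            // Assumes a project reference to the SQLXML Bulk Load interop assembly.
            // schema.xsd is the annotated mapping schema, data.xml is the payload.
            SQLXMLBulkLoad4 bulkLoad = new SQLXMLBulkLoad4();
            bulkLoad.ConnectionString =
                "Provider=SQLOLEDB;Data Source=.;Initial Catalog=Target;Integrated Security=SSPI";
            bulkLoad.ErrorLogFile = "error.log";
            bulkLoad.Execute("schema.xsd", "data.xml");
        }
    }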
Guess what? You don't have a SQL Server problem. You have an XML problem!
Faced with your situation, I wouldn't hesitate. I'd use Perl and one of its many XML modules to parse the data, create simple tab- or other-delimited files to bulk load, and bcp the resulting files.
Using the server to parse your XML has many disadvantages:
Not fast, more than likely
Positively useless error messages, in my experience
No debugger
Nowhere to turn when one of the above turns out to be true
If you use Perl on the other hand, you have line-by-line processing and debugging, error messages intended to guide a programmer, and many alternatives should your first choice of package turn out not to do the job.
If you do this kind of work often and don't know Perl, learn it. It will repay you many times over.
How can I write join statements on a DataSet?
I have data in XML format. I can load that data into a DataSet,
but how do I fetch data from two DataTables using a join query?
Well, it partly depends on how you want to express that join. If you know the query beforehand, I would personally use LINQ to Objects via LINQ to DataSet - that's particularly handy if you're working with strongly typed datasets, but it can work even without that.
The sample code for C# in Depth has some examples in LINQ to DataSet you could have a look at.
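For example, a join over two DataTables with LINQ to DataSet might look like this (the Customer/Order tables are made up; AsEnumerable() and Field<T>() come from System.Data.DataSetExtensions):

    using System;
    using System.Data;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            // Two hypothetical tables, in practice loaded via ds.ReadXml(...).
            var ds = new DataSet();
            var customers = ds.Tables.Add("Customer");
            customers.Columns.Add("Id", typeof(int));
            customers.Columns.Add("Name", typeof(string));
            var orders = ds.Tables.Add("Order");
            orders.Columns.Add("CustomerId", typeof(int));
            orders.Columns.Add("Total", typeof(decimal));

            customers.Rows.Add(1, "Ada");
            orders.Rows.Add(1, 42.50m);

            // The join itself, expressed in LINQ to Objects over the rows.
            var query =
                from c in customers.AsEnumerable()
                join o in orders.AsEnumerable()
                    on c.Field<int>("Id") equals o.Field<int>("CustomerId")
                select new
                {
                    Name = c.Field<string>("Name"),
                    Total = o.Field<decimal>("Total")
                };

            foreach (var row in query)
            {
                Console.WriteLine("{0}: {1}", row.Name, row.Total);
            }
        }
    }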
Now, if you want to read the query dynamically as well, that makes it a lot harder.
Is this XML actually an XML-serialized dataset? Do you definitely need to get datasets involved at all? If it's just plain XML, have you tried using LINQ to XML with LINQ to Objects? It may be less efficient, but how vital is that for your application? How large is the data likely to be?