Loading XML Files - C#

I'm currently working on loading a lot of XML files (thousands of files, from 1KB to 6MB) into destination databases. Currently, I'm using the SQLXMLBULKLOAD COM object. One of the biggest problems I'm having is that the COM object doesn't always play nice within our transactional environment. There are other problems too, such as performance; the process really begins choking on files approaching ~2MB, taking several minutes, if not longer in some cases (hours), to load into the tables.
So now I'm looking for an alternative, of which there seems to be two flavors:
1) Something like OPENXML, where XML is inserted as XML data into SQL Server
or
2) Solutions that parse the XML in memory, and load as rowsets into the database.
There are drawbacks to either approach, and I know I'm going to have to benchmark some prototype solutions before I jump to any conclusions. The OPENXML approach is very attractive IMO, mainly because it promises some really good performance numbers (others claim to have gone from hours to milliseconds). But it has the drawback of storing data as XML - not ideal in my particular scenario, since the destination tables already exist, and many other processes rely on queries and SPROCs that treat these tables as normal rowset data.
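To be concrete about option 1, the pattern I've seen looks roughly like the sketch below (table, column and element names are invented; the real mapping would come from the client's XSD):

    // Minimal OPENXML sketch: the server parses the XML document and
    // shreds the selected nodes into rows. dbo.Items, the XPath, and
    // the column list are hypothetical placeholders.
    using System.Data;
    using System.Data.SqlClient;

    class OpenXmlLoader
    {
        const string Sql = @"
            DECLARE @doc int;
            EXEC sp_xml_preparedocument @doc OUTPUT, @xml;
            INSERT INTO dbo.Items (ItemId, Name)
            SELECT ItemId, Name
            FROM OPENXML(@doc, '/Root/Item', 2)
            WITH (ItemId int, Name nvarchar(100));
            EXEC sp_xml_removedocument @doc;";

        public static void Load(string connectionString, string xml)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(Sql, conn))
            {
                cmd.Parameters.Add("@xml", SqlDbType.NText).Value = xml;
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }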
Whatever solution I choose, I must meet the following requirements:
1) Must accept any XML file. Clients (in a business sense) need only provide an XSD, and an appropriate destination database/table(s) for the data.
2) Individual files (never larger than ~6MB) must be processed in under 1 minute (ideally even much quicker than that).
3) Inserted data must be able to accommodate existing queries and SPROCs (i.e., it must ultimately end up as normal rowset data).
So my question is, do you have any experience in this situation, and what are your thoughts and insights?
I am not completely opposed to an OPENXML-like solution, as long as the data can end up as normal rowset data at some point. I am also interested in 3rd-party solutions you may have experience with; this is an important part of our process, and we are willing to spend some $ if it gets us the best and most stable solution.
I'm also not opposed to "roll-your-own" suggestions, or things on CodePlex, etc. I came across the LINQ to XSD project, but couldn't find much documentation about its capabilities (just as an example of the kind of thing I'm interested in).

I would revisit the performance issues you are having with the SQLXMLBULKLOAD COM object. I have used this component to load 500MB XML files before. Can you post the code you are using to invoke it?
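For comparison, here's roughly how I invoke it (a sketch; paths and the connection string are placeholders). In particular, check the Transaction property: transacted loads are staged through a temp file and are much slower than non-transacted ones, which may be part of your problem in a transactional environment.

    // Typical SQLXMLBulkLoad invocation via the COM interop assembly
    // (add a reference to SQLXMLBULKLOADLib). Paths are placeholders.
    using SQLXMLBULKLOADLib;

    class BulkLoader
    {
        public static void Load()
        {
            var loader = new SQLXMLBulkLoad4Class();
            loader.ConnectionString =
                "Provider=SQLOLEDB;Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;";
            loader.ErrorLogFile = @"C:\load\errors.xml";
            // Transacted loads stage the data in a temp file first and
            // are considerably slower than a straight bulk load.
            loader.Transaction = false;
            loader.Execute(@"C:\load\mapping.xsd", @"C:\load\data.xml");
        }
    }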

Related

SQL Database vs. Multiple Flat Files (Thousands of Small CSVs)

We are designing an update to a current system (C++/CLI and C#).
The system will gather small (~1MB) amounts of data from ~10K devices (in the near future). Currently, device data is saved to CSV files (one table per file), all stored in a wide folder structure.
Data is only inserted (create/append to a file, create folder), never updated or removed.
Data processing is done by reading many CSVs into an external program (like Matlab), mainly for statistical analysis.
There is an option to start saving this data to an MS-SQL database.
Processing time (reading the CSVs into the external program) can be up to a few minutes.
How should we choose which method to use?
Does one of the methods take significantly more storage than the other?
Roughly, when does reading the raw data from a database become quicker than reading the CSVs? (10 files? 100 files? ...)
I'd appreciate your answers, Pros and Cons are welcome.
Thank you for your time.
Well, if you are using data in one CSV to get data in another CSV, I would guess that SQL Server is going to be faster than whatever you have come up with. I suspect SQL Server would be faster in most cases, but I can't say for sure. Microsoft has put a lot of resources into making a DBMS that does exactly what you are trying to do.
Based on your description it sounds like you have almost created your own DBMS based on table data and folder structure. I suspect that if you switched to using SQL Server you would probably find a number of areas where things are faster and easier.
Possible Pros:
Faster access
Easier to manage
Easier to expand should you need to
Easier to enforce data integrity
Easier to design more complex relationships
Possible Cons:
You would have to rewrite your existing code to use SQL Server instead of your current system
You may have to pay for SQL Server; you would have to check whether SQL Server Express would work for you
Good luck!
I'd like to try hitting those questions a bit out of order.
Roughly, when does reading the raw data from a database become quicker than reading the CSVs? (10 files, 100 files? ...)
Immediately. The database is optimized (assuming you've done your homework) to read data out at incredible rates.
Does one of the methods take significantly more storage than the other?
Until you're up in the tens of thousands of files, it probably won't make too much of a difference. Space is cheap, right? However, once you get into the big leagues, you'll notice that the DB is taking up much, much less space.
How should we choose which method to use?
Great question. Everything in the database always comes back to scalability. If you had only a single CSV file to read, you'd be good to go. No DB required. Even dozens, no problem.
It looks like you could end up in a position where you scale up to levels where you'll definitely want the DB engine behind your data pretty quickly. When in doubt, creating a database is the safe bet, since you'll still be able to query that 100 GB worth of data in a second.
This is a question many of our customers have where I work. Unless you need flat files for an existing infrastructure, or you just don't think you can figure out SQL Server, or if you will only have a few files with small amounts of data to manage, you will be better off with SQL Server.
If you have the option to use an MS SQL database, I would do that.
Maintaining data in a wide folder structure is never a good idea. Reading your data would involve reading several files that could be stored anywhere on your disk, so your file I/O time would be quite high. SQL Server, being a production database, already has these problems taken care of.
You are reinventing the wheel here. This is how FoxPro manages data: one file per table. It is usually a good idea to use proven technology unless you are actually building a database server.
I do not have any test statistics here, but reading several files will almost always be slower than a database if you are dealing with any significant amount of data. Given your ~10K devices, you should consider using a standard database.

Good data access methodology(ies) for planning tool?

I'm building/maintaining some planning tools, with the following characteristics:
- Data is loaded (read-only) from MS Access/SQL Server into a C# (.NET Framework 3.5) application. Data is loaded into SQL Server/MS Access from the ERP system.
- Significant amounts of maintenance data are loaded (say 2,000,000 records in total, from various tables), and all of this data is needed simultaneously to do the planning.
Currently, I am using typed DataTables that I fill using a TableAdapter. I then iterate over the rows in each table, creating custom objects that hold the same data. The rest of my code works only with those custom objects.
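In code, the pattern is essentially the following (simplified with a plain DataTable and invented table/type names; the real version uses the designer-generated typed DataTables and TableAdapters):

    // Sketch of the current pattern: fill a table via an adapter, then
    // project each row into a custom object. Names are illustrative.
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    class Order { public int Id; public string Product; }

    class Loader
    {
        public static List<Order> LoadOrders(string connectionString)
        {
            var table = new DataTable();
            using (var adapter = new SqlDataAdapter(
                "SELECT OrderId, Product FROM dbo.Orders", connectionString))
            {
                adapter.Fill(table);
            }
            var result = new List<Order>();
            foreach (DataRow row in table.Rows)
                result.Add(new Order { Id = (int)row["OrderId"], Product = (string)row["Product"] });
            return result;
        }
    }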
What are alternatives to this approach, and what are the pros/cons of the alternatives over this approach in terms of maintainability and loading speed (from SQL Server/MSAccess into memory)?
The main disadvantage of the current approach is that I need to load entire tables, while in some cases I could dynamically determine which records I actually need to retrieve. But the current framework does not appear to offer easy support for this.
Your approach sounds very reasonable to me.
Its main advantage is that it's very simple.
The only reason I see to change it is if you indeed have performance problems.
In that case, I would suggest loading your data in chunks (say, 5000 rows at a time, something like that).
If you're using different servers for your app and for the database engine, you might benefit from loading the next batch in a separate thread while processing the current batch.
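Something along these lines (a rough sketch; LoadBatch and MyRecord stand in for your own data-access call and record type):

    // Sketch: process the current batch while the next one loads on a
    // ThreadPool thread via the delegate's BeginInvoke (.NET 3.5 era).
    using System;
    using System.Collections.Generic;

    class MyRecord { /* your planning fields */ }

    class BatchedLoader
    {
        const int BatchSize = 5000;

        // loadBatch(offset, count) is a hypothetical data-access call,
        // e.g. "SELECT TOP (@count) ... WHERE Id > @lastId ORDER BY Id".
        public static void ProcessAll(Func<int, int, List<MyRecord>> loadBatch,
                                      Action<List<MyRecord>> process)
        {
            int offset = 0;
            List<MyRecord> current = loadBatch(offset, BatchSize);
            while (current.Count > 0)
            {
                offset += current.Count;
                // Start fetching the next batch while we work on this one.
                IAsyncResult ar = loadBatch.BeginInvoke(offset, BatchSize, null, null);
                process(current);
                current = loadBatch.EndInvoke(ar);
            }
        }
    }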
But, again - if it's working fine the way it is, then it's fine.
p.s. Like billinkc, I'm curious about MS Access - can it really perform well with those volumes of data?
For the sake of performance, and to avoid reinventing the wheel, I would strongly consider using an ETL library - for instance, RhinoETL.

Reading and Writing XML as relational data - best practices

I'm supposed to do the following:
1) read a huge (700MB, ~10 million elements) XML file;
2) parse it, preserving order;
3) create one or more text files with SQL insert statements for bulk loading into the DB;
4) take the relational tuples and write them back out as XML.
I'm here to exchange some ideas about the best (== fast fast fast...) way to do this. I will use C# 4.0 and SQL Server 2008.
I believe that XmlTextReader is a good start. But I do not know if it can handle such a huge file. Does it load the whole file when it is instantiated, or does it hold just the current node in memory? I suppose I can do a while(reader.Read()) loop and that should be fine.
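Something like this trivial sketch is what I have in mind:

    // XmlReader/XmlTextReader is a forward-only streaming parser; it holds
    // only the current node in memory, so file size should not matter.
    using System;
    using System.Xml;

    class StreamingParse
    {
        public static void CountElements(string path)
        {
            int elements = 0;
            using (XmlReader reader = XmlReader.Create(path))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element)
                        elements++;
                }
            }
            Console.WriteLine("{0} elements", elements);
        }
    }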
What is the best way to write the text files? Since I have to preserve the ordering of the XML (adopting some numbering scheme), I will have to hold some parts of the tree in memory to do the calculations, etc. Should I build the statements with a StringBuilder?
I will have two scenarios: one where every node (element, attribute or text) will be in the same table (i.e., will be the same object), and another where for each type of node (just these three types; no comments, etc.) I will have a table in the DB and a class to represent the entity.
My last specific question is: how good is DataSet.WriteXml? Will it handle 10M tuples? Maybe it's best to bring chunks from the database and use an XmlWriter... I really don't know.
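The chunked alternative I'm picturing would look something like this (table and column names invented for illustration):

    // Sketch: stream rows with SqlDataReader and emit XML with XmlWriter,
    // so neither side is ever fully materialized in memory.
    using System.Data.SqlClient;
    using System.Xml;

    class XmlExporter
    {
        public static void Export(string connectionString, string outputPath)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT NodeId, NodeType, Value FROM dbo.Nodes ORDER BY NodeId", conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                using (XmlWriter writer = XmlWriter.Create(outputPath))
                {
                    writer.WriteStartElement("Nodes");
                    while (reader.Read())
                    {
                        writer.WriteStartElement("Node");
                        writer.WriteAttributeString("id", reader.GetInt32(0).ToString());
                        writer.WriteAttributeString("type", reader.GetString(1));
                        writer.WriteString(reader.GetString(2));
                        writer.WriteEndElement();
                    }
                    writer.WriteEndElement();
                }
            }
        }
    }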
I'm testing all this stuff... But I decided to post this question to hear from you guys, hoping your expertise can help me do these things more correctly and faster.
Thanks in advance,
Pedro Dusso
I'd use the SQLXML Bulk Load Component for this. You provide a specially annotated XSD schema for your XML with embedded mappings to your relational model. It can then bulk load the XML data blazingly fast.
If your XML has no schema, you can create one from Visual Studio by loading the file and selecting Create Schema from the XML menu. You will need to add the mappings to your relational model yourself, however. This blog has some posts on how to do that.
Guess what? You don't have a SQL Server problem. You have an XML problem!
Faced with your situation, I wouldn't hesitate. I'd use Perl and one of its many XML modules to parse the data, create simple tab- or other-delimited files to bulk load, and bcp the resulting files.
Using the server to parse your XML has many disadvantages:
Not fast, more than likely
Positively useless error messages, in my experience
No debugger
Nowhere to turn when one of the above turns out to be true
If you use Perl on the other hand, you have line-by-line processing and debugging, error messages intended to guide a programmer, and many alternatives should your first choice of package turn out not to do the job.
If you do this kind of work often and don't know Perl, learn it. It will repay you many times over.

How to store my data (C#.net)

I'm having a bit of a problem deciding how to store some data. Seen from a simple perspective, it is just tables of data, but there will be many tables, each with about 7 columns, and they will be created at runtime, whenever the customer wants a clean grid.
The data has to be stored locally in a file (and there will not be multiple instances of the software running).
I'm using C# 4.0 and I have been looking at using XML files (one file per table, or multiple tables per file), SQLite, SQL Server CE, Access, etc. I would be happy if someone here has some comments or suggestions on what to do or not to do. Stability and reliability (e.g., no trashed databases because of unstable third-party software) is probably my biggest concern.
If you are looking to store the data locally in a file, I would recommend the SQLite option, since it seems your data is already in the form of database tables. SQLite is built to handle multiple tables and columns, so it means less mental overhead for you, the developer.
http://web.archive.org/web/20100208133236/http://www.mikeduncan.com/sqlite-on-dotnet-in-3-mins/ is a decent tutorial to give a quick overview on how to set it up and get going.
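The basic usage is only a few lines; here is a minimal sketch (assuming the System.Data.SQLite provider is referenced; table and column names are just examples):

    // Minimal System.Data.SQLite sketch: create a table at runtime and
    // insert a row with parameters. Names are illustrative.
    using System.Data.SQLite;

    class LocalStore
    {
        public static void Demo()
        {
            using (var conn = new SQLiteConnection("Data Source=mydata.db"))
            {
                conn.Open();
                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandText =
                        "CREATE TABLE IF NOT EXISTS Grid1 (Id INTEGER PRIMARY KEY, Col1 TEXT, Col2 REAL)";
                    cmd.ExecuteNonQuery();

                    cmd.CommandText = "INSERT INTO Grid1 (Col1, Col2) VALUES (@c1, @c2)";
                    cmd.Parameters.AddWithValue("@c1", "hello");
                    cmd.Parameters.AddWithValue("@c2", 3.14);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }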
As for what NOT to do: don't try to invent your own scheme for saving the data to a file; it's a well-understood problem that has been solved many times over, so why reinvent the wheel?
XML won't be a good choice if you are planning to run many queries, since loading large text files gets painful as they grow (say, files over 1MB). If you plan to keep the data volume low, XML is good for keeping things simple. I still wouldn't use it, but if you already have an XML background, the benefits may outweigh the learning curve of something new.
If you have no expertise in any of them, and the data is light, my suggestion is SQLite. I believe it is the best lightweight DB for .NET, and the provider is very good; you can find it easily on Google.
I would tell you that Access is not recommendable, but this is a personal opinion. Many people use it, and I think there must be some reason for that, so you should check it out and try it.
Again, my final recommendation is SQLite, unless you know another one very well, in which case you'll have to think about how much your data is going to grow. If you expect the DB to stay around 100MB, any of them except XML would do; if you think it'll grow bigger than that, consider SQLite heavily.

Datasets and XML in place of proper db: Not a good idea?

In continuation of: Storing DataRelation in xml?
Thanks to everybody for the answers to my earlier thread. However, could I ask why nobody supports this XML-based approach? What exactly would the problems be? I can apply constraints to a DataSet, and I can, I guess, also use transactions.
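For example, this is roughly what I have in mind (a toy sketch):

    // What I mean by applying constraints to a DataSet and persisting it
    // as XML. Table and column names are just examples.
    using System.Data;

    class XmlBackedStore
    {
        public static void Demo()
        {
            var ds = new DataSet("MyDb");
            var table = ds.Tables.Add("Users");
            var id = table.Columns.Add("Id", typeof(int));
            table.Columns.Add("Name", typeof(string));
            table.Constraints.Add(new UniqueConstraint("PK_Users", id, true));

            table.Rows.Add(1, "Saurabh");
            // Writing the schema too means the constraints survive a round trip.
            ds.WriteXml("mydb.xml", XmlWriteMode.WriteSchema);

            var ds2 = new DataSet();
            ds2.ReadXml("mydb.xml");
        }
    }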
I am new to this. So if you could point me to some link, where I can find some sort of comparisons, that would be really helpful.
According to the FAQ, discussions are not very encouraged, but I guess this is quite specific. I hope not to be fired for this... :)
Thanks for reading,
Saurabh.
Database management systems are specifically designed to store data and retrieve it quickly, to preserve the integrity of the data and to leverage concurrent access to the data.
XML, on the other hand, was originally designed for documents, separating content from presentation. It became a handy way to store simple data because the file structure is so well defined, and then it got out of hand, with people trying to store entire databases in an unsuited structure.
XML doesn't guarantee atomicity, concurrency, integrity, fast access or anything like that. Not inherently, anyway. .NET's DataSet libraries do help in that regard, but just because you can serialize DataSet objects to XML doesn't make it a good place to store data for multiple users.
When you're faced with two tools, one which was designed to do exactly what you need to do (in this case a DBMS) and one that was designed to do something else but has been kludged to do what you want, sorta (in this case XML), you should probably go with the first option.
Concurrency will be the main issue, where multiple users want to access the same "database" file. Performance is the other, because the whole file has to be loaded into memory, and if the file size grows it will get unmanageable. Query performance also suffers, because it won't be as efficient as having something as tuned and honed as an RDBMS do it for you.
Though I would need to research to back this up, I suspect that there are some performance implications to not using an actual database.
If there is a chance that another application built on a different platform (not ADO.NET) might need to access your data in the future, having to work with a giant XML file will very likely make life more difficult. A relational DB is the standard approach to this sort of problem.
