I'm adding MySQL compatibility to my program, which previously worked only with SQL Server. I used SqlBulkCopy and I would like to use the equivalent with MySQL as well. I know there is MySqlBulkLoader, which can be used to perform the same task. The difference, however, is that SqlBulkCopy worked with a DataTable, so I prepared my DataTable and then performed the copy. MySqlBulkLoader, as far as I know, is used to copy an entire file into the database. But I am not dealing with a file here, and I would prefer to skip the extra steps of converting my DataTable into a temp file, performing the bulk copy, and then deleting the temp file.
Is there a way to make MySqlBulkLoader work with DataTables? Is there a trustworthy alternative to MySqlBulkLoader?
I assume that you're using the MySql Connector/NET, but which version of it?
Assuming that you're using the latest version (8.0 at the time of writing), a look at the MySQL Connector/NET 8.0 API Reference shows that there is no option other than importing your data from an existing file.
It seems your proposed method is the only workaround for that...
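For what it's worth, the temp-file round trip can be wrapped in a few lines. Below is a minimal sketch, assuming MySql.Data's MySqlBulkLoader and a DataTable whose column order matches the target table; BulkCopyDataTable and the tab-delimited format are my own choices, not part of the connector:

using System;
using System.Data;
using System.IO;
using System.Linq;
using MySql.Data.MySqlClient;

// Sketch of the temp-file workaround: dump the DataTable to a delimited
// file, point MySqlBulkLoader at it, then clean up.
static void BulkCopyDataTable(DataTable table, string connectionString)
{
    string tempFile = Path.GetTempFileName();
    try
    {
        using (var writer = new StreamWriter(tempFile))
        {
            foreach (DataRow row in table.Rows)
            {
                // Naive formatting: real data may need escaping of tabs/newlines.
                writer.WriteLine(string.Join("\t", row.ItemArray.Select(v => v?.ToString())));
            }
        }

        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open();
            var loader = new MySqlBulkLoader(connection)
            {
                TableName = table.TableName,
                FileName = tempFile,
                FieldTerminator = "\t",
                LineTerminator = Environment.NewLine,
                Local = true // the file lives on the client, not the server
            };
            int rows = loader.Load();
        }
    }
    finally
    {
        File.Delete(tempFile); // remove the temp file whether or not the load succeeded
    }
}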
Related
I have almost the exact same issue as the scenario linked below, but unfortunately I'm unable to recreate the solutions successfully.
I have a C# application using SQL bulk import with a data reader and WriteToServer, where it's the SqlDataReader or an OracleDataReader, and I need to add columns to the result set.
I cannot do it in the source SQL statement.
I cannot load a DataTable first and modify it, as it's hundreds of GBs of data, almost a terabyte.
How to add columns to DataReader
Can anyone provide a working example and help "push" me over this problem?
I temporarily found a solution using SQL Server Integration Services (SSIS), but what I found while watching it run is that it downloads all the data to a DTS buffer, then does the column modifications, and then pumps the data into SQL Server. Try doing that with a couple hundred GBs of data and it does not perform well, even if you can get your infrastructure to build you a 24-core VM with 128 GB of memory.
I finally have a small working example; the CodeProject article (jdweng) was helpful.
I will post a follow-up. I've tested with SQL Server (SqlDataReader); I still need to test with the Oracle data reader.
One of the cases I was trying was converting an Oracle unique ID (stored as a string) to SQL Server as a uniqueidentifier. I want to convert it on the fly; there is no way to adjust the source Oracle statement (ADyson) to return a datatype compatible with SQL Server. Altering a 1 TB table afterwards from varchar(40) to uniqueidentifier is painful, but if I could just change it as part of the bulk insert, it would be quick. And I think now I will be able to.
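For anyone landing here later: one common approach is to wrap the source reader in your own IDataReader that converts (or adds) columns as SqlBulkCopy pulls rows. A sketch under that assumption follows; GuidConvertingReader and the guidOrdinal parameter are illustrative names, not from any library:

using System;
using System.Data;

// Wraps an inner IDataReader and converts one column on the fly:
// a varchar(40) GUID string becomes a Guid, so SqlBulkCopy can write
// it straight into a uniqueidentifier column without a staging table.
sealed class GuidConvertingReader : IDataReader
{
    private readonly IDataReader _inner;
    private readonly int _guidOrdinal; // ordinal of the string column to convert

    public GuidConvertingReader(IDataReader inner, int guidOrdinal)
    {
        _inner = inner;
        _guidOrdinal = guidOrdinal;
    }

    // The only member with real logic: parse the string in place.
    public object GetValue(int i) =>
        i == _guidOrdinal && !_inner.IsDBNull(i)
            ? (object)Guid.Parse(_inner.GetString(i))
            : _inner.GetValue(i);

    public Type GetFieldType(int i) => i == _guidOrdinal ? typeof(Guid) : _inner.GetFieldType(i);
    public Guid GetGuid(int i) => i == _guidOrdinal ? Guid.Parse(_inner.GetString(i)) : _inner.GetGuid(i);

    public int GetValues(object[] values)
    {
        int n = Math.Min(values.Length, FieldCount);
        for (int i = 0; i < n; i++) values[i] = GetValue(i);
        return n;
    }

    // Everything else simply forwards to the wrapped reader.
    public int FieldCount => _inner.FieldCount;
    public bool Read() => _inner.Read();
    public bool NextResult() => _inner.NextResult();
    public int Depth => _inner.Depth;
    public bool IsClosed => _inner.IsClosed;
    public int RecordsAffected => _inner.RecordsAffected;
    public void Close() => _inner.Close();
    public void Dispose() => _inner.Dispose();
    public DataTable GetSchemaTable() => _inner.GetSchemaTable();
    public string GetName(int i) => _inner.GetName(i);
    public int GetOrdinal(string name) => _inner.GetOrdinal(name);
    public string GetDataTypeName(int i) => _inner.GetDataTypeName(i);
    public bool IsDBNull(int i) => _inner.IsDBNull(i);
    public object this[int i] => GetValue(i);
    public object this[string name] => GetValue(GetOrdinal(name));
    public bool GetBoolean(int i) => _inner.GetBoolean(i);
    public byte GetByte(int i) => _inner.GetByte(i);
    public long GetBytes(int i, long o, byte[] b, int bo, int len) => _inner.GetBytes(i, o, b, bo, len);
    public char GetChar(int i) => _inner.GetChar(i);
    public long GetChars(int i, long o, char[] b, int bo, int len) => _inner.GetChars(i, o, b, bo, len);
    public IDataReader GetData(int i) => _inner.GetData(i);
    public DateTime GetDateTime(int i) => _inner.GetDateTime(i);
    public decimal GetDecimal(int i) => _inner.GetDecimal(i);
    public double GetDouble(int i) => _inner.GetDouble(i);
    public float GetFloat(int i) => _inner.GetFloat(i);
    public short GetInt16(int i) => _inner.GetInt16(i);
    public int GetInt32(int i) => _inner.GetInt32(i);
    public long GetInt64(int i) => _inner.GetInt64(i);
    public string GetString(int i) => _inner.GetString(i);
}

Then bulkCopy.WriteToServer(new GuidConvertingReader(sourceReader, guidOrdinal)) streams rows one at a time, so memory stays flat even at terabyte scale. Adding a brand-new column works the same way: report FieldCount + 1 and synthesize the value when the extra ordinal is requested.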
I currently have a database connected to ODBC using the DBISAM 4 ODBC Driver.
I need a way to convert this database into an .MDB access database file using code.
I suggest doing it in 2 steps:
Conversion of the database schema. In this step, create an SQL file with CREATE TABLE commands built from your source database's metadata. Some data types in your source may differ and may be hard to convert to MS Access. Run the SQL commands on MS Access and correct errors until your schema looks identical (the same names of tables and columns, and identical or very similar data types).
Copying the data. Now you have an identical or very similar schema on both sides, so export the source data to the destination tables. There are many ways of doing it. I prefer Jython with JDBC drivers and a PreparedStatement with INSERT, with code that looks like:
insert_stmt.setObject(i, rs_in.getObject(i))
This will work over ODBC, since JDK 1.7 and earlier include the JDBC-ODBC bridge (it disappeared in JDK 1.8). In the .NET environment it is very similar.
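For the .NET side, a rough equivalent of that Jython loop over System.Data.Odbc might look like the following; the connection strings, table name, and column count are placeholders you'd fill in from your own schema:

using System.Data.Odbc;
using System.Linq;

// Read each row from the DBISAM source over ODBC and replay it into
// Access with a parameterized INSERT (? markers are positional in ODBC).
static void CopyTable(string sourceConnStr, string accessConnStr, string table, int columnCount)
{
    using (var src = new OdbcConnection(sourceConnStr))
    using (var dst = new OdbcConnection(accessConnStr))
    {
        src.Open();
        dst.Open();

        string markers = string.Join(", ", Enumerable.Repeat("?", columnCount));
        using (var select = new OdbcCommand($"SELECT * FROM {table}", src))
        using (var insert = new OdbcCommand($"INSERT INTO {table} VALUES ({markers})", dst))
        {
            for (int i = 0; i < columnCount; i++)
                insert.Parameters.Add(new OdbcParameter());

            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Mirrors insert_stmt.setObject(i, rs_in.getObject(i))
                    for (int i = 0; i < columnCount; i++)
                        insert.Parameters[i].Value = reader.GetValue(i);
                    insert.ExecuteNonQuery();
                }
            }
        }
    }
}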
I have an application that reads from an SQL Server CE 4.0 database file.
The user has the option on startup to choose a database. Each database has the same schema, but different data.
Given that I want to ensure that they don't use an invalid database (or point the app at a Word file or something), is it possible to validate the schema of a selected database?
In the past I have used ADO.NET to check that each column in each table exists, but this seems dreadfully silly when Entity Framework is there. Surely there must be something in EF that performs this, but I can't find it.
I am looking for an answer more sophisticated than "run a query and if it fails then the database is invalid", as there could be many other reasons why such a query would fail.
There is no functionality in EF to do this.
You can use my SQL CE Scripting API, available via NuGet http://www.nuget.org/packages/ErikEJ.SqlCeScripting/
First use
DataSet dsOriginal = repository.GetSchemaDataSet(repository.GetAllTableNames());
against a known-good database, save the result, and add it to your app.
Then use
DataSet dsChanged = repository.GetSchemaDataSet(repository.GetAllTableNames());
on the loaded database.
Then compare the two DataSets
dsOriginal.Merge(dsChanged);
DataSet dsDifferences = dsOriginal.GetChanges();
(If dsDifferences is non-null and has tables with rows, then there were differences; GetChanges returns null when the two schemas match.)
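Putting the pieces together, a sketch of the whole check; I'm assuming the IRepository/DB4Repository types the package exposes for SQL CE 4.0 (exact type names may vary between package versions), and the connection strings are placeholders:

using System.Data;
using ErikEJ.SqlCe;

// Compare a known-good schema (shipped with the app) against the schema
// of the database file the user selected.
static bool SchemaMatches(string knownGoodConnStr, string selectedConnStr)
{
    DataSet dsOriginal, dsChanged;

    using (IRepository repository = new DB4Repository(knownGoodConnStr))
        dsOriginal = repository.GetSchemaDataSet(repository.GetAllTableNames());

    using (IRepository repository = new DB4Repository(selectedConnStr))
        dsChanged = repository.GetSchemaDataSet(repository.GetAllTableNames());

    // Merge then diff: GetChanges() returns null when nothing differed.
    dsOriginal.Merge(dsChanged);
    return dsOriginal.GetChanges() == null;
}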
My library also has a method DetermineVersion(string fileName) to check if the file appears to be a valid SQLCE file.
I've got a bunch of SQL dump files that I'd like to import into a dataset with C#. The machine this code will run on does not have SQL installed. Is there any way to do this short of parsing the text manually?
Thanks!
EDIT: The Dump file is just a long list of SQL statements specifying the schema and values of the database.
Is this a SQL dump of the backup sort that you create via a DUMP or BACKUP statement? If so, the answer is pretty much no. It's essentially a snapshot of the physical structure of the database, so you need to restore/load the dump/backup into an operable database before you can do anything with it.
I am pretty sure there is no way to turn a bunch of arbitrary SQL statements into an ADO.NET DataSet without running the SQL commands through a database engine that understands the dialect in the dump file. You can't even do it between databases; e.g., a MySQL dump file will not run on an MS SQL Server.
You don't have to go manual once you know the format. Creating a DataSet can be done in two ways:
Set up code to loop the file and create a DataSet directly (see the sketch below)
Set up code to loop the file and create an XML document in DataSet format
Either will work. The best choice depends on your familiarity with the above (hint: if you have no familiarity with either, choose the DataSet).
It is all a question of format. What type of database is the DUMP file from? MySQL? Postgres? Other?
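If it does turn out to be a plain text file of INSERT statements (as the EDIT suggests), here is a rough sketch of the first option; the regex and quoting rules are deliberately naive, so a real dump would need a proper tokenizer for quoted commas, escapes, and multi-row VALUES lists:

using System;
using System.Data;
using System.IO;
using System.Text.RegularExpressions;

// Loop the dump file and fill a DataSet directly from statements of the
// form: INSERT INTO name VALUES ('a', 'b');
static DataSet LoadDump(string path)
{
    var ds = new DataSet();
    var insertPattern = new Regex(@"INSERT INTO\s+`?(\w+)`?\s+VALUES\s*\((.*)\);",
                                  RegexOptions.IgnoreCase);
    foreach (string line in File.ReadLines(path))
    {
        Match m = insertPattern.Match(line);
        if (!m.Success) continue;

        string tableName = m.Groups[1].Value;
        string[] values = m.Groups[2].Value.Split(','); // naive: breaks on quoted commas

        if (!ds.Tables.Contains(tableName))
        {
            DataTable t = ds.Tables.Add(tableName);
            for (int i = 0; i < values.Length; i++)
                t.Columns.Add("Column" + i); // dumps rarely name columns per row
        }
        ds.Tables[tableName].Rows.Add(
            Array.ConvertAll(values, v => (object)v.Trim().Trim('\'')));
    }
    return ds;
}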
Can you tell me what MySqlBulkLoader is for, where and how to use it?
Some examples would also be appreciated, please.
MySqlBulkLoader is a class in MySQL Connector/NET that wraps the MySQL statement LOAD DATA INFILE. This gives Connector/NET the ability to load a data file from a local or remote host into the server. [MySQLBulkLoader]
An example of how to use MySqlBulkLoader is also presented here.
To be clear:
MySqlBulkLoader is not similar to SqlBulkCopy. SqlBulkCopy, also called bulk insert, reads data from a DataTable, while MySqlBulkLoader, which wraps LOAD DATA INFILE, reads from a file. If you have a list of data to insert into your database, SqlBulkCopy lets you prepare and insert the data directly, whereas with MySqlBulkLoader you will need to generate a file from your data before running the command.
There is no counterpart to SqlBulkCopy in MySQL Connector/NET at the time of writing; however, MySQL itself supports multi-row bulk inserts, so you can run the corresponding command in a MySqlCommand as presented here (a sketch follows below).
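A sketch of that multi-row INSERT alternative, building one parameterized MySqlCommand per batch; my_table and its two columns are placeholders of my own:

using System.Collections.Generic;
using MySql.Data.MySqlClient;

// Turn a batch of in-memory rows into a single multi-row INSERT.
static void InsertBatch(MySqlConnection connection, IList<(int Id, string Name)> rows)
{
    if (rows.Count == 0) return;

    var values = new List<string>();
    using (var command = new MySqlCommand { Connection = connection })
    {
        for (int n = 0; n < rows.Count; n++)
        {
            values.Add($"(@id{n}, @name{n})");
            command.Parameters.AddWithValue($"@id{n}", rows[n].Id);
            command.Parameters.AddWithValue($"@name{n}", rows[n].Name);
        }
        command.CommandText =
            "INSERT INTO my_table (id, name) VALUES " + string.Join(", ", values);
        command.ExecuteNonQuery();
    }
}

Keep batches to a few thousand rows so the statement stays under the server's max_allowed_packet limit.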
MySqlBulkLoader is a class provided by the MySQL .NET Connector.
It provides an interface to MySQL that is similar in concept to the SqlBulkCopy class / BCP for SQL Server. Basically, it allows you to load data into MySQL in bulk. A decent-looking example can be found at dragthor.wordpress.com, and there's also an example in the MySQL documentation.
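For a quick illustration, a minimal sketch along the lines of those examples; the connection string, file path, and table name are placeholders:

using MySql.Data.MySqlClient;

// Load a tab-delimited file into an existing table via LOAD DATA INFILE.
using (var connection = new MySqlConnection("server=localhost;uid=user;pwd=pass;database=test"))
{
    connection.Open();
    var loader = new MySqlBulkLoader(connection)
    {
        TableName = "my_table",
        FileName = @"C:\data\my_table.txt",
        FieldTerminator = "\t",
        LineTerminator = "\n",
        NumberOfLinesToSkip = 1, // skip a header row
        Local = true             // file is on the client machine
    };
    int rowsLoaded = loader.Load();
}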