I am working on moving a database from MS Access to SQL Server. To move the data into the new tables I have decided to write a sync routine, since the schema has changed quite significantly; it also lets me run testing on the programs that use the database and resync whenever I need fresh test data. Eventually I will do one last sync and go live on the new SQL Server version.
Unfortunately I have hit a snag. My method for copying from Access to SQL Server is below:
public static void BulkCopyAccessToSQLServer(
    string sql, CommandType commandType, DBConnection sqlServerConnection,
    string destinationTable, DBConnection accessConnection, int timeout)
{
    using (DataTable dt = new DataTable())
    using (OleDbConnection conn = new OleDbConnection(GetConnection(accessConnection)))
    using (OleDbCommand cmd = new OleDbCommand(sql, conn))
    using (OleDbDataAdapter adapter = new OleDbDataAdapter(cmd))
    {
        cmd.CommandType = commandType;
        cmd.Connection.Open();
        adapter.SelectCommand.CommandTimeout = timeout;
        adapter.Fill(dt);

        using (SqlConnection conn2 = new SqlConnection(GetConnection(sqlServerConnection)))
        using (SqlBulkCopy copy = new SqlBulkCopy(conn2))
        {
            conn2.Open();
            copy.DestinationTableName = destinationTable;
            copy.BatchSize = 1000;
            copy.BulkCopyTimeout = timeout;
            copy.NotifyAfter = 1000;   // must be set before WriteToServer to take effect
            copy.WriteToServer(dt);
        }
    }
}
Basically this queries Access for the data using the input SQL string; the result has all the correct field names, so I don't need to set ColumnMappings.
This was working until I reached a table with a calculated field. SqlBulkCopy doesn't seem to know to skip the field and tries to update the column, which fails with the error "The column 'columnName' cannot be modified because it is either a computed column or is the result of a union operator."
Is there an easy way to make it skip the calculated field?
I am hoping not to have to specify a full column mapping.
There are two ways to dodge this:
use ColumnMappings to formally define the column relationships (you note you don't want this) - a sketch of this follows the list
push the data into a staging table - a basic table, not part of your core transactional tables, whose entire purpose is to look exactly like this data import; then use a T-SQL command to transfer the data from the staging table to the real table (sketched after the list of reasons below)
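For completeness, here is a minimal sketch of the first option, built on the method in the question; "ComputedColumnName" is a placeholder for your calculated field, and once any mapping is added, only the mapped columns are copied:
// sketch of option 1: map every column in the DataTable except the calculated one
// ("ComputedColumnName" is a placeholder; dt is the DataTable filled from Access)
using (SqlConnection conn2 = new SqlConnection(GetConnection(sqlServerConnection)))
using (SqlBulkCopy copy = new SqlBulkCopy(conn2))
{
    conn2.Open();
    copy.DestinationTableName = destinationTable;
    foreach (DataColumn col in dt.Columns)
    {
        if (col.ColumnName != "ComputedColumnName")
            copy.ColumnMappings.Add(col.ColumnName, col.ColumnName);
    }
    copy.WriteToServer(dt);
}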
I always favor the second option, for various reasons:
I never have to mess with mappings - this is actually important to me ;p
the insert to the real table will be fully logged (SqlBulkCopy is not necessarily logged)
I have the fastest possible insert - no constraint checking, no indexing, etc
I don't tie up a transactional table during the import, and there is no risk of non-repeatable queries running against a partially imported table
I have a safe abort option if the import fails half way through, without having to use transactions (nothing has touched the transactional system at this point)
it allows some level of data-processing when pushing it into the real tables, without the need to either buffer everything in a DataTable at the app tier, or implement a custom IDataReader
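To make the second option concrete, here is a rough sketch based on the method in the question; the staging and real table names and the column list are invented for illustration:
// sketch of option 2: SqlBulkCopy into a plain staging table, then one T-SQL INSERT
// ("StagingCustomers", "Customers" and the column list are illustrative names only)
using (SqlConnection conn2 = new SqlConnection(GetConnection(sqlServerConnection)))
{
    conn2.Open();

    using (SqlBulkCopy copy = new SqlBulkCopy(conn2))
    {
        copy.DestinationTableName = "dbo.StagingCustomers";   // basic table, no computed columns
        copy.BulkCopyTimeout = timeout;
        copy.WriteToServer(dt);
    }

    // push into the real table; the computed column simply isn't mentioned
    using (SqlCommand move = new SqlCommand(
        @"INSERT INTO dbo.Customers (Id, Name, Amount)
          SELECT Id, Name, Amount FROM dbo.StagingCustomers;
          TRUNCATE TABLE dbo.StagingCustomers;", conn2))
    {
        move.ExecuteNonQuery();
    }
}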
Related
I'm copying some data from one SQL Server database to another SQL Server database.
That works fine; what I need is to check whether some data already exists and, if so, not copy it. How can I do that? Any suggestions?
string Source = ConfigurationManager.ConnectionStrings["Db1"].ConnectionString;
string Destination = ConfigurationManager.ConnectionStrings["Db2"].ConnectionString;

using (SqlConnection sourceCon = new SqlConnection(Source))
using (SqlCommand cmd = new SqlCommand("SELECT [Id],[Client] FROM [Db1].[dbo].[Client]", sourceCon))
{
    sourceCon.Open();
    using (SqlDataReader rdr = cmd.ExecuteReader())
    using (SqlConnection destCon = new SqlConnection(Destination))
    using (SqlBulkCopy bc = new SqlBulkCopy(destCon))
    {
        bc.DestinationTableName = "Clients";
        bc.ColumnMappings.Add("Id", "ClientId");
        bc.ColumnMappings.Add("Client", "Client");

        destCon.Open();
        bc.WriteToServer(rdr);
    }
}
One way to do what you're after would be to bulk-copy into a staging table (a separate table with similar layout), and then perform a conditional insert from the staging table into the real table.
You could also do something similar using a table-valued-parameter instead of SqlBulkCopy, and treat the table-valued-parameter as the staging table.
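A rough sketch of the table-valued-parameter variant: the type name dbo.ClientTableType is an assumption (you would need to create it on the destination first), and sourceTable stands for a DataTable holding the Id/Client rows you read from the source:
// assumes something like this has been run once on the destination:
//   CREATE TYPE dbo.ClientTableType AS TABLE (ClientId INT, Client NVARCHAR(200));
using (SqlConnection destCon = new SqlConnection(Destination))
using (SqlCommand cmd = new SqlCommand(
    @"INSERT INTO dbo.Clients (ClientId, Client)
      SELECT s.ClientId, s.Client
      FROM @Source AS s
      WHERE NOT EXISTS (SELECT 1 FROM dbo.Clients c WHERE c.ClientId = s.ClientId);", destCon))
{
    SqlParameter p = cmd.Parameters.AddWithValue("@Source", sourceTable);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.ClientTableType";   // must match the type created on the server

    destCon.Open();
    cmd.ExecuteNonQuery();
}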
Copy all tables from your source database to your destination database as temp tables, then run SQL to add the missing records from the temp tables to the destination tables. The final step is to delete the temp tables.
Hope that will work for you.
You could make a database link from the source db to the destination and run a query to work out which rows need to transit, but be careful not to drag too much data over the link, as that can make the process slow - realistically you only need the columns you will use to determine whether a row in the source equals a row in the destination.
Typically, though, it's easier to bulk copy all the data into a temporary table at the destination and then use a MERGE or an insert-with-left-join to move only the missing rows from the temporary table into the real table.
Here's an example of how to insert only some rows that don't already exist:
INSERT INTO real (column1, column2...)
SELECT temp.column1, temp.column2...
FROM temp
LEFT JOIN real ON real.ID = temp.ID
WHERE real.ID IS NULL
In C# terms it would look like:
new SqlCommand(@"INSERT INTO real (column1, column2...)
    SELECT temp.column1, temp.column2...
    FROM temp
    LEFT JOIN real ON real.ID = temp.ID
    WHERE real.ID IS NULL", conn).ExecuteNonQuery();
You need to run this with conn connected to your destination database.
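If you also want to update rows that already exist, rather than only inserting the missing ones, a MERGE from the temp table is the usual pattern; the table and column names here are illustrative:
// same pattern as above, but MERGE updates existing rows and inserts new ones
new SqlCommand(@"MERGE real AS target
    USING temp AS source ON target.ID = source.ID
    WHEN MATCHED THEN
        UPDATE SET target.column1 = source.column1, target.column2 = source.column2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ID, column1, column2)
        VALUES (source.ID, source.column1, source.column2);", conn).ExecuteNonQuery();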
I am trying to create a temp table from a SELECT statement so that I can get the schema information from the temp table.
I am able to achieve this in SQL Server with the following code:
-- This creates the temp table
SELECT location.id, location.name INTO #URM_TEMP_TABLE FROM location
-- This retrieves column information from the temp table
SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '#U%'
If I run the code in c# like so:
using (CONN = new SqlConnection(Settings.Default.UltrapartnerDBConnectionString))
{
    var commandText = ReportToDisplay.ReportQuery.ToLower().Replace("from", "into #URM_TEMP_TABLE from");
    using (SqlCommand command = CONN.CreateCommand())
    {
        // Create temp table
        CONN.Open();
        command.CommandText = commandText;
        int retVal = command.ExecuteNonQuery();
        CONN.Close();

        // Get column data from temp table
        command.CommandText = "SELECT * FROM TEMPDB.INFORMATION_SCHEMA.Columns WHERE TABLE_NAME like '#U%'";
        CONN.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                ColumnsForReport.Add(new ListBoxCheckBoxItemModel
                {
                    Name = reader["COLUMN_NAME"].ToString(),
                    DataType = reader["DATA_TYPE"].ToString(),
                    IsSelected = false,
                    RVMCommandModel = this
                });
            }
        }
        CONN.Close();

        // Drop table
        command.CommandText = "DROP TABLE #URM_TEMP_TABLE";
        CONN.Open();
        command.ExecuteNonQuery();
        CONN.Close();
    }
}
Everything works until it gets to the drop statement: Cannot drop the table '#URM_TEMP_TABLE'
ExecuteNonQuery returns 2547, which is the number of rows the temp table is supposed to have in it. However, it seems that the table does not actually get created. Is ExecuteNonQuery the right method to call?
Temporary tables are only in scope for the current session. In the code you've posted you open a connection, create the temp table, and close the connection;
then you open another connection (a new session) and attempt to drop a table that is not in scope for that session.
You would need to drop the temp table within the same connection, or possibly make it a global temp table (##) - though in this case, with two separate connections, a global temp table would still fall out of scope.
Additionally, as was pointed out in the comments, your temp tables will be cleaned up automatically - but if you really do want to drop them, you must do so from the session that created them.
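In other words, do the create, the schema query and the drop over one open connection, roughly like this (a sketch of the flow only, reusing the names from the question):
// sketch: one open connection = one session for the whole temp-table lifetime
using (var conn = new SqlConnection(Settings.Default.UltrapartnerDBConnectionString))
using (var command = conn.CreateCommand())
{
    conn.Open();

    // create the temp table
    command.CommandText = commandText;   // the SELECT ... INTO #URM_TEMP_TABLE ... from the question
    command.ExecuteNonQuery();

    // read its column metadata in the same session
    command.CommandText = "SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '#U%'";
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // build ColumnsForReport from COLUMN_NAME / DATA_TYPE as in the original code
        }
    }

    // still the same session, so the drop now succeeds
    command.CommandText = "DROP TABLE #URM_TEMP_TABLE";
    command.ExecuteNonQuery();
}   // closing the connection would have removed the local temp table anyway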
EDIT taken from another SO thread:
Global temporary tables in SQL Server
Global temporary tables operate much like local temporary tables; they
are created in tempdb and cause less locking and logging than
permanent tables. However, they are visible to all sessions, until the
creating session goes out of scope (and the global ##temp table is no
longer being referenced by other sessions). If two different sessions
try the above code, if the first is still active, the second will
receive the following:
Server: Msg 2714, Level 16, State 6, Line 1 There is already an object
named '##people' in the database.
I have yet to see a valid justification for the use of a global ##temp
table. If the data needs to persist to multiple users, then it makes
much more sense, at least to me, to use a permanent table. You can
make a global ##temp table slightly more permanent by creating it in
an autostart procedure, but I still fail to see how this is
advantageous over a permanent table. With a permanent table, you can
deny permissions; you cannot deny users from a global ##temp table.
Looks like global temp tables still go out of scope... they're just bad to use in general IMO. Can you just drop the table in the same session or rethink your solution?
I'm trying to send a DataTable to a stored procedure using C#, .NET 2.0 and SQL Server 2012 Express.
This is roughly what I'm doing:
// define the DataTable
var accountIdTable = new DataTable("[dbo].[TypeAccountIdTable]");

// define the column
var dataColumn = new DataColumn { ColumnName = "[ID]", DataType = typeof(Guid) };

// add the column to the DataTable
accountIdTable.Columns.Add(dataColumn);

// feed it with the unique contact ids
foreach (var uniqueId in uniqueIds)
{
    accountIdTable.Rows.Add(uniqueId);
}

using (var sqlCmd = new SqlCommand())
{
    // define command details
    sqlCmd.CommandType = CommandType.StoredProcedure;
    sqlCmd.CommandText = "[dbo].[msp_Get_Many_Profiles]";
    sqlCmd.Connection = dbConn; // an open database connection

    // define the table-valued parameter
    var sqlParam = new SqlParameter();
    sqlParam.ParameterName = "@tvp_account_id_list";
    sqlParam.SqlDbType = SqlDbType.Structured;
    sqlParam.Value = accountIdTable;

    // add the parameter to the command
    sqlCmd.Parameters.Add(sqlParam);

    // execute the procedure
    rResult = sqlCmd.ExecuteReader();

    // print the results
    while (rResult.Read())
    {
        PrintRowData(rResult);
    }
}
But then I get the following error:
ArgumentOutOfRangeException: No mapping exists from SqlDbType Structured to a known DbType.
Parameter name: SqlDbType
Upon investigating further (on MSDN, SO and other places), it appears that .NET 2.0 does not support sending a DataTable to the database (it is missing things such as SqlParameter.TypeName), but I'm still not sure, since I haven't seen anyone explicitly state that this feature is not available in .NET 2.0.
Is this true?
If so, is there another way to send a collection of data to the database?
Thanks in advance!
Out of the box, ADO.NET does not support this, with good reason: a DataTable can have just about any number of columns, which may or may not map to a real table in your database.
If I'm understanding what you want to do - upload the contents of a DataTable quickly to a pre-defined, real table with the same structure - I'd suggest you investigate SqlBulkCopy.
From the documentation:
Microsoft SQL Server includes a popular command-prompt utility named
bcp for moving data from one table to another, whether on a single
server or between servers. The SqlBulkCopy class lets you write
managed code solutions that provide similar functionality. There are
other ways to load data into a SQL Server table (INSERT statements,
for example), but SqlBulkCopy offers a significant performance
advantage over them.
The SqlBulkCopy class can be used to write data only to SQL Server
tables. However, the data source is not limited to SQL Server; any
data source can be used, as long as the data can be loaded to a
DataTable instance or read with a IDataReader instance.
SqlBulkCopy will fail when bulk loading a DataTable column of type
SqlDateTime into a SQL Server column whose type is one of the
date/time types added in SQL Server 2008.
However, you can define table-valued parameters in later versions of SQL Server and use them to send a table (a DataTable) to the procedure in the way you're asking. There's an example at http://sqlwithmanoj.wordpress.com/2012/09/10/passing-multipledynamic-values-to-stored-procedures-functions-part4-by-using-tvp/
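As a rough sketch of what the parameter looks like on a runtime that supports it (the ADO.NET that ships with .NET Framework 3.5 or later), reusing the names from the question - note the TypeName, which is the piece the question identified as missing in .NET 2.0:
// same parameter as in the question, but with the table type name supplied
// (assumes the table type dbo.TypeAccountIdTable already exists on the server)
var sqlParam = new SqlParameter();
sqlParam.ParameterName = "@tvp_account_id_list";
sqlParam.SqlDbType = SqlDbType.Structured;
sqlParam.TypeName = "dbo.TypeAccountIdTable";
sqlParam.Value = accountIdTable;
sqlCmd.Parameters.Add(sqlParam);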
In my experience, if the code compiles in C#, ADO.NET supports the type; if it fails when you execute the code, the target database might not support it. In your case you mention SQL Server 2012 Express, so it might not support it. Table types were supported from SQL Server 2005, per my understanding, but you had to keep the database compatibility level above 99 or so. I am positive it works in SQL Server 2008, because I have used it, and still use it extensively, to do bulk updates through stored procedures using User-Defined Table Types (UDTTs) as the in-parameter for the stored procedure. Again, you must keep the database compatibility level above 99 to use the MERGE command for bulk updates.
And of course you can use SqlBulkCopy, but I'm not sure how reliable it is.
Which would be better for executing an insert statement against an MS SQL database: a SqlDataAdapter or a SqlCommand object?
Which of them would be better when inserting only one row, and when inserting multiple rows?
A simple example of code usage:
SQL Command
string query = "insert into Table1(col1,col2,col3) values (@value1,@value2,@value3)";
int i;
SqlCommand cmd = new SqlCommand(query, connection);

// add parameters...
cmd.Parameters.Add("@value1", SqlDbType.VarChar).Value = txtBox1.Text;
cmd.Parameters.Add("@value2", SqlDbType.VarChar).Value = txtBox2.Text;
cmd.Parameters.Add("@value3", SqlDbType.VarChar).Value = txtBox3.Text;

connection.Open();
i = cmd.ExecuteNonQuery();
connection.Close();
SQL Data Adapter
DataSet dsTab = new DataSet("Table1");
SqlDataAdapter adp = new SqlDataAdapter("Select * from Table1", connection);
adp.Fill(dsTab, "Table1");

// create the new row only after the table exists in the DataSet
DataRow dr = dsTab.Tables["Table1"].NewRow();
dr["col1"] = txtBox1.Text;
dr["col2"] = txtBox5.Text;
dr["col3"] = "text";
dsTab.Tables["Table1"].Rows.Add(dr);

SqlCommandBuilder projectBuilder = new SqlCommandBuilder(adp);
DataSet newSet = dsTab.GetChanges(DataRowState.Added);
adp.Update(newSet, "Table1");
Updating a data source is much easier using DataAdapters. It's easier to make changes since you just have to modify the DataSet and call Update.
There is probably no (or very little) difference in the performance between using DataAdapters vs Commands. DataAdapters internally use Connection and Command objects and execute the Commands to perform the actions (such as Fill and Update) that you tell them to do, so it's pretty much the same as using only Command objects.
I would use LinqToSql with a DataSet for single inserts and most database CRUD requests. It is type safe and relatively fast for uncomplicated queries such as the one above.
If you have many rows to insert (1000+) and you are using SQL Server 2008, I would use SqlBulkCopy. You can use your DataSet as input to a stored procedure and merge it into your destination.
For complicated queries I recommend using dapper in conjunction with stored procedures.
I suggest you keep some kind of control over your communication with the database. That means abstracting some code, and for that the CommandBuilder automatically generates the CUD statements for you.
What would be even better is to use that technique together with a typed DataSet; then you have IntelliSense and compile-time checking on all your columns.
I have two Access 2003 databases (fooDb and barDb). There are four tables in fooDb that are linked to tables in barDb.
Two questions:
How do I update the table contents (the linked tables in fooDb should be synchronized with the table contents in barDb)?
How do I re-link the tables to a different barDb using ADO.NET?
I googled but didn't get any helpful results. What I found out is how to accomplish this in VB(6) and DAO, but I need a solution for C#.
Here is my solution to relinking DAO tables using C#.
My application uses a central MS Access database and 8 actual databases that are linked in.
The central database is stored locally to my C# app but the application allows for the 8 data databases to be located elsewhere. On startup, my C# app relinks DAO tables in the central database based on app.config settings.
As a side note, this database structure is the result of my app originally being an MS Access app that I ported to VB6. I am currently converting the app to C#. I could have moved off MS Access in VB6 or C#, but it is a very easy-to-use desktop DB solution.
In the central database, I created a table called linkedtables with three columns TableName, LinkedTableName and DatabaseName.
On App start, I call this routine
Common.RelinkDAOTables(Properties.Settings.Default.DRC_Data
, Properties.Settings.Default.DRC_LinkedTables
, "SELECT * FROM LinkedTables");
Default.DRC_Data - the current folder of the central Access DB
Default.DRC_LinkedTables - the current folder of the 8 data databases
Here is the code that does the actual relinking of the DAO tables in C#:
public static void RelinkDAOTables(string MDBfile, string filepath, string sql)
{
    DataTable linkedTables = TableFromMDB(MDBfile, sql);
    dao.DBEngine DBE = new dao.DBEngine();
    dao.Database DB = DBE.OpenDatabase(MDBfile, false, false, "");

    foreach (DataRow row in linkedTables.Rows)
    {
        dao.TableDef table = DB.TableDefs[row["Name"].ToString()];
        table.Connect = string.Format(";DATABASE={0}{1} ;TABLE={2}", filepath, row["database"], row["LinkedName"]);
        table.RefreshLink();
    }
}
Additional code, written to fetch data from an Access database and return it as a DataTable:
public static DataTable TableFromOleDB(string Connectstring, string Sql)
{
    try
    {
        using (OleDbConnection conn = new OleDbConnection(Connectstring))
        using (OleDbCommand cmd = new OleDbCommand(Sql, conn))
        using (OleDbDataAdapter adapter = new OleDbDataAdapter(cmd))
        {
            conn.Open();
            DataTable table = new DataTable();
            adapter.Fill(table);
            return table;
        }
    }
    catch (OleDbException)
    {
        return null;
    }
}

public static DataTable TableFromMDB(string MDBfile, string Sql)
{
    return TableFromOleDB(string.Format(sConnectionString, MDBfile), Sql);
}
If you're coding in C#, then Access is not involved, only Jet. So, you can use whatever method you want to access the data and then code the updates.
I've coded this kind of thing in Access many times, and my approach for each table is:
run a query that deletes records from fooDB that no longer exist in barDB.
run a query that inserts into fooDB records that are in barDB that do not yet exist in fooDB.
I always use code that writes on-the-fly SQL to update the fooDB table with the data from barDB.
The third one is the hard one. I loop through the fields collection in DAO and write SQL on the fly that comes out something like this:
UPDATE table2 INNER JOIN table1 ON table2.ID = table1.ID
SET table2.field1=table1.field1
WHERE (table2.field1 & "") <> (table1.field1 & "")
For numeric fields you'd have to use your available SQL dialect's function for converting Null to zero. Running Jet SQL, I'd use Nz(), of course, but that doesn't work via ODBC. Not sure if it will work with OLEDB, though.
In any event, the point is to issue a bunch of column-by-column SQL updates instead of trying to do it row by row, which will be much less efficient.
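For what it's worth, here is a rough C# sketch of generating that kind of column-by-column UPDATE on the fly. It uses an OleDb schema DataTable rather than the DAO fields collection, and the table names fooTable/barTable and the ID join key are placeholders:
// requires: using System.Collections.Generic; using System.Data;
// builds one UPDATE statement like the one above from a column-schema table,
// e.g. the result of conn.GetSchema("Columns", new[] { null, null, "barTable", null })
static string BuildUpdateSql(DataTable columns)
{
    List<string> sets = new List<string>();
    List<string> diffs = new List<string>();

    foreach (DataRow col in columns.Rows)
    {
        string name = col["COLUMN_NAME"].ToString();
        if (name == "ID") continue;   // never overwrite the join key

        sets.Add(string.Format("fooTable.[{0}] = barTable.[{0}]", name));
        diffs.Add(string.Format("(fooTable.[{0}] & '') <> (barTable.[{0}] & '')", name));
    }

    return "UPDATE fooTable INNER JOIN barTable ON fooTable.ID = barTable.ID SET "
        + string.Join(", ", sets.ToArray())
        + " WHERE " + string.Join(" OR ", diffs.ToArray());
}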