Insert data into database table using parameters for column names - C#

I want to insert data into a table using a stored procedure (SP), but I receive the table's column names as parameters. Is there a way to use those parameters as column names in the SP's INSERT statement?
cmd3.Connection = conn;
cmd3.CommandType = CommandType.StoredProcedure;
cmd3.CommandText = "CustomerInfoProcedure";
cmd2.Parameters.AddWithValue("#Col0", Col0);
cmd2.Parameters.AddWithValue("#Col1", Col1);
cmd2.Parameters.AddWithValue("#Col2", Col2);
-- insert statement used inside the SP; it raises an error
insert into #AgentDetails (@Col0, @Col1, @Col2)
values (@eCol0, @eCol1, @eCol2);

To my knowledge (although I haven't worked with SQL Server for a while), you can't really do that directly. If you insist on having it this way, you have a couple of (bad) options:
Use sp_executesql and build your query dynamically by concatenating the relevant strings (see MSDN); a sketch follows below. This approach will likely result in fairly slow queries (since they can't be optimized before they are generated) and has huge security downsides (think SQL injection).
Have a set of prepared queries inside your SP that covers all possible combinations of the input parameters. This alleviates the performance and security concerns, but it leaves you (depending on the number of field combinations you want to support) with huge, complicated SP code that is hard to maintain.
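For illustration, here's a minimal client-side sketch of the dynamic-SQL idea (the same thing can be done inside the SP with sp_executesql). The table name, the column whitelist, and the variable names are placeholders - the key point is that column names are validated against a known set before being concatenated, while the values still go through real parameters:
// Sketch only: validate column names against a whitelist before
// concatenating them; never splice raw user input into the SQL text.
var allowedColumns = new HashSet<string> { "field0", "field1", "field2" }; // hypothetical set
if (!allowedColumns.Contains(Col0) || !allowedColumns.Contains(Col1) || !allowedColumns.Contains(Col2))
    throw new ArgumentException("Unknown column name");

// Column names are whitelisted above, so bracketing them here is acceptable;
// the values are still passed as proper parameters.
string sql = $"INSERT INTO AgentDetails ([{Col0}], [{Col1}], [{Col2}]) VALUES (@v0, @v1, @v2);";
using (var cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@v0", eCol0);
    cmd.Parameters.AddWithValue("@v1", eCol1);
    cmd.Parameters.AddWithValue("@v2", eCol2);
    cmd.ExecuteNonQuery();
}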
UPD:
After seeing your comment: in your case it will be much better to handle column reordering in the code and only have a single signature for the SP. E.g.
var value0 = Col0 == "field0" ? eCol0 : Col1 == "field0" ? eCol1 : eCol2;
var value1 = ...
cmd3.Connection = conn;
cmd3.CommandType = CommandType.StoredProcedure;
cmd3.CommandText = "CustomerInfoProcedure";
cmd2.Parameters.AddWithValue("#value0", value0);
cmd2.Parameters.AddWithValue("#value1", value1);
cmd2.Parameters.AddWithValue("#value2", value2);
//insert query using in SP
insert into #AgentDetails (field0, field1, field2)
values (#value0, #value1, #value2);
UPD2
If you have a large number of variables, then a similar approach will work as long as all the values and the column-to-field mapping are stored in appropriate data structures. For example, let's say you have an array of column names in the order they appear in your spreadsheet, e.g. taken from the spreadsheet header:
string[] columnsNames; // ["field1", "field2", "field10", "field0", ...]
an array of values that you need to insert, per row, in the same order as columnsNames:
string[] values; // ["value1", "value2", "value10", "value0", ...]
and an array where the column names are listed in the order they need to be in for your SP parameters:
// This list can be made into a constant, but you need
// to keep it in sync with the SP signature
string[] parameterOrder = { "field0", "field1", "field2", /* ... */ };
In this case you can use logic like this one to add your data right into the correct place:
// This dictionary will be used for field position lookups
var columnOrderDict = new Dictionary<string, int>();
for (var i = 0; i < columnsNames.Length; i++)
{
    columnOrderDict[columnsNames[i]] = i;
}
cmd3.Connection = conn;
cmd3.CommandType = CommandType.StoredProcedure;
cmd3.CommandText = "CustomerInfoProcedure";
for (var j = 0; j < parameterOrder.Length; j++)
{
    var currentFieldName = parameterOrder[j];
    if (columnOrderDict.ContainsKey(currentFieldName))
    {
        cmd3.Parameters.AddWithValue(currentFieldName, values[columnOrderDict[currentFieldName]]);
    }
    else
    {
        cmd3.Parameters.AddWithValue(currentFieldName, DBNull.Value);
    }
}
This code is built on multiple assumptions, such as that the column headers in your spreadsheet exactly match the stored procedure parameter names, but I hope it gives you enough of a hint to build your own logic.
Also, don't forget proper validation - currently the only thing this code guards against is a field the SP needs being missing from the input data. You also need to validate the data format, check that the number of values matches the number of headers, and so on.

Related

cmd.ExecuteScalar() works but throws ORA-25191 Exception

My code works - the function returns the correct SELECT COUNT(*) value - but it still throws an ORA-25191 exception ("Cannot reference overflow table of an index-organized table") at
retVal = Convert.ToInt32(cmd.ExecuteScalar());
Since I call the function very often, the exceptions slow down my program tremendously.
private int getSelectCountQueryOracle(string Sqlquery)
{
    try
    {
        int retVal = 0;
        using (DataTable dataCount = new DataTable())
        {
            using (OracleCommand cmd = new OracleCommand(Sqlquery))
            {
                cmd.CommandType = CommandType.Text;
                cmd.Connection = oraCon;
                using (OracleDataAdapter dataAdapter = new OracleDataAdapter())
                {
                    retVal = Convert.ToInt32(cmd.ExecuteScalar());
                }
            }
        }
        return retVal;
    }
    catch (Exception ex)
    {
        exceptionProtocol("Count Function", ex.ToString());
        return 1;
    }
}
This function is called in a foreach loop
// function call in a foreach loop which goes through table names
foreach (DataRow row in dataTbl.Rows)
{
    ...
    tableNameFromRow = row["TABLE_NAME"].ToString();
    tableRows = getSelectCountQueryOracle("select count(*) as 'count' from " + tableNameFromRow);
    tableColumns = getSelectCountQueryOracle("SELECT COUNT(*) as 'count' FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name='" + tableNameFromRow + "'");
    ...
}
dataTbl.Rows in this outer loop, in turn, comes from the query
SELECT * FROM USER_TABLES ORDER BY TABLE_NAME
If you're using a database-agnostic API like ADO.NET, you almost always want to use the API's own facilities to fetch metadata rather than writing custom queries against each database's metadata tables. The various ADO.NET providers are much more likely to write data dictionary queries that handle all the corner cases, and those queries are much more likely to be optimized than the ones you'd write yourself. So rather than writing your own query to populate the dataTbl data table, you'd want to use the GetSchema method:
DataTable dataTbl = connection.GetSchema("Tables");
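GetSchema also accepts a restrictions array if you only need a subset. A small sketch, assuming the Oracle provider's "Tables" collection takes owner and table-name restrictions (the exact restriction order varies by provider - query connection.GetSchema("Restrictions") to check yours):
// Hypothetical restriction values; for ODP.NET the "Tables" collection
// typically takes { OWNER, TABLE_NAME }.
string[] restrictions = { "MYSCHEMA", null };
DataTable myTables = connection.GetSchema("Tables", restrictions);
foreach (DataRow r in myTables.Rows)
{
    Console.WriteLine(r["TABLE_NAME"]);
}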
If you want to keep your custom-coded data dictionary query for some reason, you'd need to filter out the IOT overflow tables, since you can't query those directly:
select *
from user_tables
where iot_type IS NULL
or iot_type != 'IOT_OVERFLOW'
Be aware, however, that there are likely to be other tables you don't want to count. For example, the dropped column indicates whether a table has been dropped - presumably you don't want to count the rows of an object sitting in the recycle bin, so you'd want a dropped = 'NO' predicate as well. And you can't do a count(*) on a nested table, so you'd want a nested = 'NO' predicate too if your schema happens to contain nested tables. There are probably other corner cases, depending on the exact set of features your particular schema uses, that the provider's developers have already coded for and that you'd otherwise have to handle yourself.
So I'd start with
select *
from user_tables
where ( iot_type IS NULL
or iot_type != 'IOT_OVERFLOW')
and dropped = 'NO'
and nested = 'NO'
but know that you'll probably need/want to add some additional filters depending on the specific features your users make use of. I'd certainly much rather let the fine folks who develop the ADO.NET provider worry about all those corner cases than find all of them myself.
Taking a step back, though, I'd question why you're regularly doing a count(*) on every table in a schema and why you need an exact answer. In most cases where you're doing counts, you're either doing a one-off where you don't much care how long it takes (i.e. a validation step after a migration), or approximate counts would be sufficient (i.e. getting a list of the biggest tables in the system in order to triage some effort, or to track growth over time for projections). In the latter case you could just use the counts that are already stored in the data dictionary - user_tables.num_rows - from the last time statistics were gathered.
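For instance, if approximate counts are acceptable, one data dictionary query replaces all the per-table COUNT(*) calls. A sketch (note that num_rows is only as fresh as the last statistics gathering, and the dropped column assumes Oracle 10g or later):
// Approximate row counts from optimizer statistics - no per-table COUNT(*).
using (OracleCommand cmd = new OracleCommand(
    "SELECT table_name, num_rows FROM user_tables WHERE dropped = 'NO'", oraCon))
using (OracleDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        string tableName = reader.GetString(0);
        // num_rows is NULL until statistics have been gathered at least once
        long approxRows = reader.IsDBNull(1) ? -1 : Convert.ToInt64(reader.GetValue(1));
        Console.WriteLine("{0}: {1}", tableName, approxRows);
    }
}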
This article helped me to solve my problem.
I've changed my query to this:
SELECT * FROM user_tables
WHERE iot_type IS NULL OR iot_type != 'IOT_OVERFLOW'
ORDER BY TABLE_NAME

Why does my SQL update for 20,000 records take over 5 minutes?

I have a piece of C# code which updates two specific columns for ~1000x20 records in a database on localhost. As far as I know (though I'm really far from being a database expert), it should not take long, but it takes more than 5 minutes.
I tried SQL transactions, with no luck. SqlBulkCopy seems a bit overkill, since it's a large table with dozens of columns and I only have to update one or two columns for a set of records, so I would like to keep it simple. Is there a better approach to improve efficiency?
The code itself:
public static bool UpdatePlayers(List<Match> matches)
{
    using (var connection = new SqlConnection(Database.myConnectionString))
    {
        connection.Open();
        SqlCommand cmd = connection.CreateCommand();
        foreach (Match m in matches)
        {
            cmd.CommandText = "";
            foreach (Player p in m.Players)
            {
                // Some player-specific calculation, which takes almost no time.
                p.Morale = SomeSpecificCalculationWhichMilisecond();
                p.Condition = SomeSpecificCalculationWhichMilisecond();
                cmd.CommandText += "UPDATE [Players] SET [Morale] = @morale, [Condition] = @condition WHERE [ID] = @id;";
                cmd.Parameters.AddWithValue("@morale", p.Morale);
                cmd.Parameters.AddWithValue("@condition", p.Condition);
                cmd.Parameters.AddWithValue("@id", p.ID);
            }
            cmd.ExecuteNonQuery();
        }
    }
    return true;
}
Updating 20,000 records one at a time is a slow process, so taking over 5 minutes is to be expected.
From your query, I would suggest putting the data into a temp table, then joining the temp table in the update. This way the engine only has to scan the Players table once and updates all the values in a single statement.
Note: it could still take a while to do the update if you have indexes on the fields you are updating and/or there is a large amount of data in the table.
Example update query:
UPDATE P
SET [Morale] = TT.[Morale], [Condition] = TT.[Condition]
FROM [Players] AS P
INNER JOIN #TempTable AS TT ON TT.[ID] = P.[ID];
Populating the temp table
How to get the data into the temp table is up to you. I suspect you could use SqlBulkCopy but you might have to put it into an actual table, then delete the table once you are done.
If possible, I recommend putting a Primary Key on the ID column in the temp table. This may speed up the update process by making it faster to find the related ID in the temp table.
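Putting it all together, a sketch of the full round trip - the INT column types are an assumption here, so match them to the real Players schema; #TempTable lives only for the lifetime of this connection:
// Sketch of the temp-table approach described above.
using (var connection = new SqlConnection(Database.myConnectionString))
{
    connection.Open();

    // 1. Create the temp table.
    using (var create = new SqlCommand(
        "CREATE TABLE #TempTable ([ID] INT PRIMARY KEY, [Morale] INT, [Condition] INT);", connection))
    {
        create.ExecuteNonQuery();
    }

    // 2. Bulk copy the computed values into it.
    var dt = new DataTable();
    dt.Columns.Add("ID", typeof(int));
    dt.Columns.Add("Morale", typeof(int));
    dt.Columns.Add("Condition", typeof(int));
    foreach (var m in matches)
        foreach (var p in m.Players)
            dt.Rows.Add(p.ID, p.Morale, p.Condition);
    using (var bulk = new SqlBulkCopy(connection))
    {
        bulk.DestinationTableName = "#TempTable";
        bulk.WriteToServer(dt);
    }

    // 3. One set-based update via the join shown above.
    using (var update = new SqlCommand(
        "UPDATE P SET [Morale] = TT.[Morale], [Condition] = TT.[Condition] " +
        "FROM [Players] AS P INNER JOIN #TempTable AS TT ON TT.[ID] = P.[ID];", connection))
    {
        update.ExecuteNonQuery();
    }
}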
Minor improvements:
use a StringBuilder for the command text
ensure your parameter names are actually unique
clear your parameters before the next use
depending on how many players are in each match, batch N commands together rather than one match at a time (see the sketch below)
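A sketch of those minor improvements combined, assuming the same Match/Player shapes as in the question; the batch size of 500 is an arbitrary choice that keeps each batch under SQL Server's 2100-parameter limit:
var sb = new StringBuilder();
using (var connection = new SqlConnection(Database.myConnectionString))
using (var cmd = connection.CreateCommand())
{
    connection.Open();
    int n = 0;
    foreach (var p in matches.SelectMany(m => m.Players))
    {
        // unique parameter names for every statement in the batch
        sb.Append($"UPDATE [Players] SET [Morale] = @morale{n}, [Condition] = @condition{n} WHERE [ID] = @id{n};");
        cmd.Parameters.AddWithValue($"@morale{n}", p.Morale);
        cmd.Parameters.AddWithValue($"@condition{n}", p.Condition);
        cmd.Parameters.AddWithValue($"@id{n}", p.ID);
        if (++n == 500) // 500 statements x 3 parameters stays under the limit
        {
            cmd.CommandText = sb.ToString();
            cmd.ExecuteNonQuery();
            sb.Clear();             // reset the text...
            cmd.Parameters.Clear(); // ...and the parameters for the next batch
            n = 0;
        }
    }
    if (n > 0) // flush the final partial batch
    {
        cmd.CommandText = sb.ToString();
        cmd.ExecuteNonQuery();
    }
}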
Bigger improvement:
use a table-valued parameter and a MERGE SQL statement, which should look something like this (untested):
CREATE TYPE [MoraleUpdate] AS TABLE (
    [Id] ...,
    [Condition] ...,
    [Morale] ...
)
GO
MERGE [dbo].[Players] AS [Target]
USING @Updates AS [Source]
ON [Target].[Id] = [Source].[Id]
WHEN MATCHED THEN
    UPDATE SET [Morale] = [Source].[Morale],
               [Condition] = [Source].[Condition];
DataTable dt = new DataTable();
dt.Columns.Add("Id", typeof(...));
dt.Columns.Add("Morale", typeof(...));
dt.Columns.Add("Condition", typeof(...));
foreach (...)
{
    dt.Rows.Add(p.Id, p.Morale, p.Condition);
}
SqlParameter sqlParam = cmd.Parameters.AddWithValue("@Updates", dt);
sqlParam.SqlDbType = SqlDbType.Structured;
sqlParam.TypeName = "dbo.[MoraleUpdate]";
cmd.ExecuteNonQuery();
You could also implement a DbDataReader to stream the values to the server while you are calculating them.

How to insert into multiple MySQL tables fast in C#?

I am trying to insert some data into two MySQL tables.
The second table stores the first table's row Id as a foreign key.
I have this code that works fine, but it is super slow. What is the best/fastest way to improve it?
string ConnectionString = "server=localhost; password = 1234; database = DB ; user = Jack";
MySqlConnection mConnection = new MySqlConnection(ConnectionString);
mConnection.Open();
int index = 1;
for (int i = 0; i < 100000; i++)
{
string insertPerson = "INSERT INTO myentities(Name) VALUES (#first_name);"
+ "INSERT INTO secondtable(Id, Name,myentities) VALUES (#ID, #city, LAST_INSERT_ID());";
MySqlCommand command = new MySqlCommand(insertPerson, mConnection);
command.Parameters.AddWithValue("#first_name", "Jack");
command.Parameters.AddWithValue("#ID", i+1);
command.Parameters.AddWithValue("#city", "Frank");
command.ExecuteNonQuery();
command.Parameters.Clear();
}
I found the following code in one of the StackOverflow questions, but it inserts data into a single table only, not into multiple tables connected through a foreign key.
This code is pretty fast, but I was not sure how I could make it work with multiple tables.
public static void BulkToMySQL()
{
    string ConnectionString = "server=192.168.1xxx";
    StringBuilder sCommand = new StringBuilder("INSERT INTO User (FirstName, LastName) VALUES ");
    using (MySqlConnection mConnection = new MySqlConnection(ConnectionString))
    {
        List<string> Rows = new List<string>();
        for (int i = 0; i < 100000; i++)
        {
            Rows.Add(string.Format("('{0}','{1}')", MySqlHelper.EscapeString("test"), MySqlHelper.EscapeString("test")));
        }
        sCommand.Append(string.Join(",", Rows));
        sCommand.Append(";");
        mConnection.Open();
        using (MySqlCommand myCmd = new MySqlCommand(sCommand.ToString(), mConnection))
        {
            myCmd.CommandType = CommandType.Text;
            myCmd.ExecuteNonQuery();
        }
    }
}
The fastest way possible is to craft a strategy for not calling MySQL in a loop via the .NET MySQL Connector, especially for i = 0 to 99999. You achieve this either through CASE A: direct db table manipulation, or CASE B: CSV-to-db imports with LOAD DATA INFILE.
For CASE B: it is often wise to bring that data into a staging table or tables first. Checks can be made for data readiness depending on the particular circumstances. What that means is that you may be getting external data that needs to be scrubbed (ETL). Other benefits include not committing unholy data to your production tables that isn't fit for consumption, which leaves an abort option open to you.
Now onto performance anecdotes. With MySQL and the .NET Connector version 6.9.9.0 in late 2016, I can achieve up to 40x performance gains by going this route. It may seem unnatural not to call an INSERT query, but I don't in loops - OK, sure, in small loops, but not in bulk data ingest. Not even for 500 rows. You will experience a noticeable UX improvement if you re-craft some routines.
The above is for data that truly came from external sources. For CASE A: data that is already in your db, the above does not apply. In those situations you strive to craft your SQL to massage the data as much as possible (read: 100%) on the server side, without bringing it back to the client only to loop over it with Connector calls and push it back up to the server. This does not necessarily mandate stored procedures. Server-side statements that operate on the data in place, without the round trip to the client and back, are what you shoot for.
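To make CASE B concrete, here is a sketch using the Connector's MySqlBulkLoader wrapper around LOAD DATA INFILE. The staging table, the file path, and the separators are placeholders, and depending on the server's local_infile setting you may need loader.Local = true:
// Sketch: CASE B via MySqlBulkLoader (the Connector's LOAD DATA INFILE wrapper).
// "staging_entities", the file path, and the separators are placeholders.
using (var conn = new MySqlConnection(ConnectionString))
{
    conn.Open();
    var loader = new MySqlBulkLoader(conn)
    {
        TableName = "staging_entities",      // hypothetical staging table
        FileName = @"C:\data\entities.csv",
        FieldTerminator = ",",
        LineTerminator = "\n",
        NumberOfLinesToSkip = 1              // skip the CSV header row
        // Local = true                      // may be needed depending on local_infile
    };
    int rowsLoaded = loader.Load();

    // Then massage server-side, e.g. one INSERT ... SELECT per target table,
    // so the 100,000-iteration client loop disappears entirely.
    using (var cmd = new MySqlCommand(
        "INSERT INTO myentities(Name) SELECT Name FROM staging_entities;", conn))
    {
        cmd.ExecuteNonQuery();
    }
}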
You can gain some improvement by moving unnecessary operations out of the loop, since anything you do there is repeated 100,000 times:
string insertPerson =
    "INSERT INTO myentities(Name) VALUES (@first_name);"
    + "INSERT INTO secondtable(Id, Name, myentities) VALUES (@ID, @city, LAST_INSERT_ID());";
string ConnectionString = "server=localhost; password = 1234; database = DB ; user = Jack";
using (var connection = new MySqlConnection(ConnectionString))
using (var command = new MySqlCommand(insertPerson, connection))
{
    // guessing at column types and lengths here
    command.Parameters.Add("@first_name", MySqlDbType.VarChar, 50).Value = "Jack";
    var id = command.Parameters.Add("@ID", MySqlDbType.Int32);
    command.Parameters.Add("@city", MySqlDbType.VarChar, 50).Value = "Frank";
    connection.Open();
    for (int i = 1; i <= 100000; i++)
    {
        id.Value = i;
        command.ExecuteNonQuery();
    }
}
But mostly, you try to avoid this scenario. Instead, you'd do something like use a numbers table to project the results for both tables in advance. There are also some things you can do around foreign key constraints and locking (you need to lock the whole table to avoid bad keys if someone else inserts or tries to read partially inserted records), transaction logging (you can set it to log only the batch rather than each change), and foreign key enforcement (you can turn it off while you handle the insert).
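For example, a sketch of wrapping the whole loop in one transaction and suspending foreign key checks for its duration - on MySQL this removes the per-statement commit overhead; just be sure to re-enable the checks afterwards:
// Sketch: one transaction around the whole loop, FK checks suspended meanwhile.
using (var connection = new MySqlConnection(ConnectionString))
{
    connection.Open();
    using (var tx = connection.BeginTransaction())
    {
        using (var off = new MySqlCommand("SET foreign_key_checks = 0;", connection, tx))
        {
            off.ExecuteNonQuery();
        }

        // ... run the prepared INSERTs from the code above here, passing tx
        //     to each MySqlCommand so they join the transaction ...

        using (var on = new MySqlCommand("SET foreign_key_checks = 1;", connection, tx))
        {
            on.ExecuteNonQuery();
        }
        tx.Commit(); // one commit instead of 100,000
    }
}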

Convert C# SQL Loop to Linq

I have a list called ListTypes that holds 10 types of products. The code below loops and runs the stored procedure for every product type, storing the results in the list ListIds. This is killing my SQL box, since I have over 200 users executing this constantly all day.
I know it's not good architecture to loop a SQL statement, but this is the only way I made it work. Any ideas how I can do this without looping? Maybe a LINQ statement? I have never used LINQ at this magnitude. Thank you.
protected void GetIds(string Type, string Sub)
{
    LinkedIds.Clear();
    using (SqlConnection cs = new SqlConnection(connstr))
    {
        for (int x = 0; x < ListTypes.Count; x++)
        {
            cs.Open();
            SqlCommand select = new SqlCommand("spUI_LinkedIds", cs);
            select.CommandType = System.Data.CommandType.StoredProcedure;
            select.Parameters.AddWithValue("@Type", Type);
            select.Parameters.AddWithValue("@Sub", Sub);
            select.Parameters.AddWithValue("@TransId", ListTypes[x]);
            SqlDataReader dr = select.ExecuteReader();
            while (dr.Read())
            {
                ListIds.Add(Convert.ToInt32(dr["LinkedId"]));
            }
            cs.Close();
        }
    }
}
Not a full answer, but this wouldn't fit in a comment. You can at least update your existing code to be more efficient like this:
protected List<int> GetIds(string Type, string Sub, IEnumerable<int> types)
{
    var result = new List<int>();
    using (SqlConnection cs = new SqlConnection(connstr))
    using (SqlCommand select = new SqlCommand("spUI_LinkedIds", cs))
    {
        select.CommandType = System.Data.CommandType.StoredProcedure;
        // Don't use AddWithValue! Be explicit about your DB types.
        // I had to guess here. Replace with the actual types from your database.
        select.Parameters.Add("@Type", SqlDbType.VarChar, 10).Value = Type;
        select.Parameters.Add("@Sub", SqlDbType.VarChar, 10).Value = Sub;
        var TransID = select.Parameters.Add("@TransId", SqlDbType.Int);
        cs.Open();
        foreach (int type in types)
        {
            TransID.Value = type;
            using (SqlDataReader dr = select.ExecuteReader())
            {
                while (dr.Read())
                {
                    result.Add((int)dr["LinkedId"]);
                }
            }
        }
    }
    return result;
}
Note that this way you only open and close the connection once. Normally in ADO.NET it's better to use a new connection and re-open it for each query; the exception is a tight loop like this one. Also, the only thing that changes inside the loop is the one parameter value. Finally, it's better to design methods that don't rely on other class state: this method no longer needs to know about the ListTypes and ListIds class variables, which makes it possible (among other things) to unit test it properly.
Again, this isn't a full answer; it's just an incremental improvement. What you really need to do is write another stored procedure that accepts a table valued parameter, and build on the query from your existing stored procedure to JOIN with the table valued parameter, so that all of this will fit into a single SQL statement. But until you share your stored procedure code, this is about as much help as I can give you.
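Still, to give an idea of the shape of that change, here's an untested sketch - the table type, the procedure name, and the column list are hypothetical, and the real query body would come from your existing spUI_LinkedIds:
// Hypothetical SQL-side objects (names are placeholders):
//   CREATE TYPE dbo.IntList AS TABLE ([TransId] INT PRIMARY KEY);
//   CREATE PROCEDURE dbo.spUI_LinkedIds_Bulk
//       @Type VARCHAR(10), @Sub VARCHAR(10), @TransIds dbo.IntList READONLY
//   AS ... your existing query, JOINed to @TransIds ...
var result = new List<int>();
var tvp = new DataTable();
tvp.Columns.Add("TransId", typeof(int));
foreach (int t in types)
{
    tvp.Rows.Add(t);
}
using (var cs = new SqlConnection(connstr))
using (var select = new SqlCommand("spUI_LinkedIds_Bulk", cs))
{
    select.CommandType = CommandType.StoredProcedure;
    select.Parameters.Add("@Type", SqlDbType.VarChar, 10).Value = Type;
    select.Parameters.Add("@Sub", SqlDbType.VarChar, 10).Value = Sub;
    var p = select.Parameters.AddWithValue("@TransIds", tvp);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.IntList";
    cs.Open();
    using (var dr = select.ExecuteReader())
    {
        while (dr.Read())
        {
            result.Add((int)dr["LinkedId"]);
        }
    }
}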
Besides the improvements others wrote:
You could insert your IDs into a temp table and then make one
SELECT * from WhatEverTable WHERE transid in (select transid from #tempTable)
On MSSQL this works really fast.
When you're not using MSSQL, it's possible that one big SELECT with joins is faster than a SELECT ... IN. You have to test these cases yourself on your DBMS.
According to your comment:
The idea is, let's say I have a table and I have to get all records from the table that have these 10 types of products. How can I get all of these products? But this number is dynamic.
So... why use a stored procedure at all? Why not query the table?
// If the [Type] and [Sub] arguments are external inputs - as in, they come from a user request or something - they should be sanitized (remove or escape '\' and apostrophes).
// create connection
string queryTmpl = "SELECT LinkedId FROM [yourTable] WHERE [TYPE] = '{0}' AND [SUB] = '{1}' AND [TRANSID] IN ({2})";
string query = string.Format(queryTmpl, Type, Sub, string.Join(", ", ListTypes));
SqlCommand select = new SqlCommand(query, cs);
// and so forth
To use LINQ to SQL, you would need to map the table to a class. This would make the query simpler to write.
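A sketch of what that query could look like once the table is mapped - the DataContext and entity names here are hypothetical:
// Assumes a LINQ to SQL DataContext with the table mapped to a hypothetical
// LinkedIdRecord class exposing Type, Sub, TransId and LinkedId columns.
using (var db = new MyDataContext(connstr))
{
    List<int> ids = db.LinkedIdRecords
        .Where(r => r.Type == Type
                 && r.Sub == Sub
                 && ListTypes.Contains(r.TransId)) // translates to SQL IN (...)
        .Select(r => r.LinkedId)
        .ToList();
}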

SqlBulkCopy Not Working

I have a DataSet populated from an Excel sheet. I wanted to use SqlBulkCopy to insert records into the Lead_Hdr table, where LeadId is the PK.
I am getting the following error while executing the code below:
The given ColumnMapping does not match up with any column in the source or destination
string ConStr = ConfigurationManager.ConnectionStrings["ConStr"].ToString();
using (SqlBulkCopy s = new SqlBulkCopy(ConStr, SqlBulkCopyOptions.KeepIdentity))
{
    if (MySql.State == ConnectionState.Closed)
    {
        MySql.Open();
    }
    s.DestinationTableName = "PCRM_Lead_Hdr";
    s.NotifyAfter = 10000;
    #region Comment
    s.ColumnMappings.Clear();
    #region ColumnMapping
    s.ColumnMappings.Add("ClientID", "ClientID");
    s.ColumnMappings.Add("LeadID", "LeadID");
    s.ColumnMappings.Add("Company_Name", "Company_Name");
    s.ColumnMappings.Add("Website", "Website");
    s.ColumnMappings.Add("EmployeeCount", "EmployeeCount");
    s.ColumnMappings.Add("Revenue", "Revenue");
    s.ColumnMappings.Add("Address", "Address");
    s.ColumnMappings.Add("City", "City");
    s.ColumnMappings.Add("State", "State");
    s.ColumnMappings.Add("ZipCode", "ZipCode");
    s.ColumnMappings.Add("CountryId", "CountryId");
    s.ColumnMappings.Add("Phone", "Phone");
    s.ColumnMappings.Add("Fax", "Fax");
    s.ColumnMappings.Add("TimeZone", "TimeZone");
    s.ColumnMappings.Add("SicNo", "SicNo");
    s.ColumnMappings.Add("SicDesc", "SicDesc");
    s.ColumnMappings.Add("SourceID", "SourceID");
    s.ColumnMappings.Add("ResearchAnalysis", "ResearchAnalysis");
    s.ColumnMappings.Add("BasketID", "BasketID");
    s.ColumnMappings.Add("PipeLineStatusId", "PipeLineStatusId");
    s.ColumnMappings.Add("SurveyId", "SurveyId");
    s.ColumnMappings.Add("NextCallDate", "NextCallDate");
    s.ColumnMappings.Add("CurrentRecStatus", "CurrentRecStatus");
    s.ColumnMappings.Add("AssignedUserId", "AssignedUserId");
    s.ColumnMappings.Add("AssignedDate", "AssignedDate");
    s.ColumnMappings.Add("ToValueAmt", "ToValueAmt");
    s.ColumnMappings.Add("Remove", "Remove");
    s.ColumnMappings.Add("Release", "Release");
    s.ColumnMappings.Add("Insert_Date", "Insert_Date");
    s.ColumnMappings.Add("Insert_By", "Insert_By");
    s.ColumnMappings.Add("Updated_Date", "Updated_Date");
    s.ColumnMappings.Add("Updated_By", "Updated_By");
    #endregion
    #endregion
    s.WriteToServer(sourceTable);
    s.Close();
    MySql.Close();
}
I've encountered the same problem while copying data from Access to SQL Server 2005, and I found that the column mappings are case sensitive on both data sources, regardless of the databases' case sensitivity.
Well, is it right? Do the column names exist on both sides?
To be honest, I've never bothered with mappings. I like to keep things simple - I tend to have a staging table that looks like the input on the server, then I SqlBulkCopy into the staging table, and finally run a stored procedure to move the rows from the staging table into the actual table. Advantages:
no issues with live data corruption if the import fails at any point
I can put a transaction just around the SPROC
I can have the bcp work without logging, safe in the knowledge that the SPROC will be logged
it is simple ;-p (no messing with mappings)
As a final thought - if you are dealing with bulk data, you can get better throughput using IDataReader (since it is a streaming API, whereas DataTable is a buffered API). For example, I tend to hook CSV imports up using CsvReader as the source for SqlBulkCopy. Alternatively, I have written shims around XmlReader to present each first-level element as a row in an IDataReader - very fast.
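A sketch of that staging-plus-streaming pattern, assuming the LumenWorks CsvReader package (its CsvReader implements IDataReader); the staging table and move procedure names are hypothetical:
// Sketch: stream a CSV into a staging table, then move rows in one SPROC.
using (var csv = new CsvReader(new StreamReader(@"C:\data\leads.csv"), true))
using (var bulk = new SqlBulkCopy(ConStr))
{
    bulk.DestinationTableName = "PCRM_Lead_Hdr_Staging"; // staging twin of the real table
    bulk.WriteToServer(csv); // streams rows; no DataTable buffered in memory
}
using (var conn = new SqlConnection(ConStr))
using (var move = new SqlCommand("usp_MoveLeadsFromStaging", conn)) // hypothetical SPROC
{
    move.CommandType = CommandType.StoredProcedure;
    conn.Open();
    move.ExecuteNonQuery(); // transactional move from staging into PCRM_Lead_Hdr
}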
The answer by Marc would be my recommendation (use a staging table). This ensures that if your source doesn't change, you'll have fewer issues importing in the future.
However, in my experience, you can check the following issues:
Column names match in source and destination
The column types match
If you think you did this and still have no success, you can try the following:
1 - Allow nulls in all columns in your table
2 - Comment out all column mappings
3 - Rerun, adding one column at a time until you find where your issue is
That should bring out the bug
One of the reasons is that SqlBulkCopy is case sensitive. Follow these steps:
First, find your column in the source table using the Contains method in C#.
Once your destination column is matched with the source column, get the index of that column and use the source table's column name in SqlBulkCopy.
For example:
// Get columns from the source table
string sourceTableQuery = "Select top 1 * from sourceTable";
// I use a SQL helper for executing the query; plain ADO.NET works as well
DataTable dtSource = SQLHelper.SqlHelper.ExecuteDataset(transaction, CommandType.Text, sourceTableQuery).Tables[0];
for (int i = 0; i < destinationTable.Columns.Count; i++)
{
    // Check if the destination column exists in the source table
    // (the Contains method is not case sensitive)
    if (dtSource.Columns.Contains(destinationTable.Columns[i].ToString()))
    {
        // Once the column is matched, get its index
        int sourceColumnIndex = dtSource.Columns.IndexOf(destinationTable.Columns[i].ToString());
        // Use the source table's column name rather than the destination's,
        // to avoid the case-sensitivity problem
        bulkCopy.ColumnMappings.Add(dtSource.Columns[sourceColumnIndex].ToString(), dtSource.Columns[sourceColumnIndex].ToString());
    }
}
bulkCopy.WriteToServer(destinationTable);
bulkCopy.Close();
I would go with the staging idea; however, here is my approach to handling the case-sensitive nature of the mappings. Happy to be critiqued on my LINQ.
using (SqlConnection connection = new SqlConnection(conn_str))
{
    connection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = string.Format("[{0}].[{1}].[{2}]", targetDatabase, targetSchema, targetTable);
        var targetColumsAvailable = GetSchema(conn_str, targetTable).ToArray();
        foreach (var column in dt.Columns)
        {
            if (targetColumsAvailable.Select(x => x.ToUpper()).Contains(column.ToString().ToUpper()))
            {
                var tc = targetColumsAvailable.Single(x => String.Equals(x, column.ToString(), StringComparison.CurrentCultureIgnoreCase));
                bulkCopy.ColumnMappings.Add(column.ToString(), tc);
            }
        }
        // Write from the source to the destination.
        bulkCopy.WriteToServer(dt);
        bulkCopy.Close();
    }
}
and the helper method
private static IEnumerable<string> GetSchema(string connectionString, string tableName)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = connection.CreateCommand())
    {
        command.CommandText = "sp_columns";
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add("@table_name", SqlDbType.NVarChar, 384).Value = tableName;
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                yield return (string)reader["column_name"];
            }
        }
    }
}
What I have found is that the columns in the table and the columns in the input must at least match. You can have more columns in the table and the input will still load. If you have fewer, you'll receive the error.
I thought a long time about answering...
Even if the column names match exactly, you get the same error if a data type differs. So check both the column names and their data types.
P.S.: staging tables are definitely the way to import.
