Disclaimer: I have no prior experience with querying databases from C# code (so go easy on me).
I am trying to insert data from my SQL Server database into my ListBox. Right now I am trying this in the form of an array. I first connect to the database, and then insert the "state" from the database into an index of the array. I want all 50 states to be put into my array, and then that information to be put into my ListBox. Right now my data is being inserted, but when I view it in the ListBox it shows System.Data.SqlClient.SqlCommand.
public string connString = "Not displaying this for security reasons, it is set up correctly though."; // Setting up the connection to my DB

public frmState()
{
    InitializeComponent();
    this.FormClosed += new System.Windows.Forms.FormClosedEventHandler(this.frmState_FormClosed);

    using (SqlConnection dbConn = new SqlConnection(connString))
    {
        dbConn.Open();
        string cmdString = "select State_Name from [State]";
        SqlCommand cmd = new SqlCommand(cmdString, dbConn);
        SqlDataReader reader = cmd.ExecuteReader();
        try
        {
            while (reader.Read())
            {
                string[] stateList = new string[50];
                for (int i = 1; i <= 50; i++)
                {
                    stateList[i - 1] = cmd.ToString();
                }
                for (int i = 0; i < stateList.Length; i++)
                {
                    lbStates.Items.Add(stateList[i].ToString());
                }
            }
        }
        finally
        {
            reader.Close();
        }
    }
}
Also, I am aware that as of right now I will be showing the same state 50 times; I am still trying to figure out how to insert one state at a time. Is this an efficient way of doing this? Any tips on working with databases in C#? I am on Visual Studio 2017 and Microsoft SQL Server 2016.
The problem comes from this line:
stateList[i - 1] = cmd.ToString();
It's wrong because you are calling ToString() on the SqlCommand object itself and storing that string in your array, rather than reading the data the command returned.
Changing the above line as below will fix your problem:
stateList[i - 1] = reader.GetString(0);
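Since you also mentioned wanting to add one state at a time rather than repeating the same value 50 times, here is a minimal sketch of the usual pattern (using the connString and lbStates from your code); you don't need the fixed-size array at all:

using (SqlConnection dbConn = new SqlConnection(connString))
{
    dbConn.Open();
    using (SqlCommand cmd = new SqlCommand("select State_Name from [State]", dbConn))
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // Each call to Read() moves to the next row, so add that row's value directly.
        while (reader.Read())
        {
            lbStates.Items.Add(reader.GetString(0));
        }
    }
}

Each row read from the State table goes straight into the ListBox, and the using blocks take care of closing everything for you.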
any tips on working with databases in C#?
For a beginner with C# and SQL, I suggest you keep learning the basic ADO.NET data access tools such as SqlDataReader, SqlDataAdapter, and so on. But to build a professional and secure application that also stays simple, you should eventually move toward an ORM tool (which acts as a layer for accessing the database securely), such as Entity Framework or LINQ to SQL, which will make talking to the database much more convenient.
In addition:
I suggest reading this tutorial on how to use SqlDataReader.
I'm trying to write a method that returns a list with every row that a select query returns. But anything I can find on this is based on a single table.
This is what I'm trying:
public List<string> Select(string querystring)
{
    string query = "SELECT " + querystring;
    List<string> results = new List<string>();

    if (this.OpenConnection())
    {
        MySqlCommand cmd = new MySqlCommand(query, connection);
        MySqlDataReader dataReader = cmd.ExecuteReader();

        while (dataReader.Read())
        {
            // in here I don't want to "hard code" every column
            results.Add("returned data");
        }

        dataReader.Close();
        this.CloseConnection();
        return results;
    }
    else
    {
        return results;
    }
}
Is this possible? Do I need to make a new method for every table?
Or should I maybe return a list of objects instead of strings?
I'm really having a brainfart here so any help is appreciated.
As I stated in my comment:
I'm not sure what you're trying to do exactly... it looks like you're hoping to put rows and columns (your data set) into just rows (your list of strings). How are you planning on using this data? It will not be very usable in the manner it looks like you're attempting. You can reference your data reader's columns by index, but again I'm not clear on what you're hoping to accomplish.
You can, however, do it with something like this:
while (reader.Read())
{
    // for loop with a maximum iteration of the number of columns in the reader.
    for (int i = 0; i < reader.FieldCount; i++)
    {
        results.Add(reader[i].ToString());
    }
}
but again, I don't think this will get you data in a usable manner.
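If you do want to keep each row's values together instead of flattening every column into one long list, a small variation (just a sketch of the same loop) is to join the columns of each row before adding it:

while (dataReader.Read())
{
    // Collect every column of the current row, then store the row as one delimited string.
    var values = new string[dataReader.FieldCount];
    for (int i = 0; i < dataReader.FieldCount; i++)
    {
        values[i] = dataReader[i].ToString();
    }
    results.Add(string.Join(", ", values));
}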
I would recommend looking into Dapper.NET, EF6, or NHibernate.
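For example, with Dapper (just a sketch; Dapper extends your existing IDbConnection, and as far as I know its untyped Query call returns one dynamic row per record that can be treated as a dictionary of column name to value):

using Dapper;

var rows = connection.Query("SELECT " + querystring)
                     .Cast<IDictionary<string, object>>()
                     .ToList();

foreach (var row in rows)
{
    // Each row exposes its columns by name, so nothing is hard-coded per table.
    foreach (var column in row)
    {
        Console.WriteLine(column.Key + " = " + column.Value);
    }
}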
Process
1. I am writing a C# application which will need to retrieve 4 million records (IDs) from a SQL table in database A.
2. I then need to use each ID to select one row from another SQL table in database B.
3. Once I have this row, I then need to update another SQL table in database C.
Questions
1. What's the most efficient way to retrieve and store the data in step 1?
a. Should I load this into a List<string>?
b. Do you recommend doing batches initially?
2. What's the most efficient way to achieve steps 2 and 3?
To retrieve the 4M records you're going to want to use a SqlDataReader - it only loads one row of data into memory at a time.
var cn = new SqlConnection("some connection string");
var cmd = new SqlCommand("SELECT ID FROM SomeTable", cn);
cn.Open();
var reader = cmd.ExecuteReader();
while (reader.Read())
{
    var id = reader.GetInt32(0);
    // and so on
}
reader.Close();
reader.Dispose();
cn.Close();
Now, to handle steps 2 and 3, I would use a DataTable for the row you need to retrieve and then a SqlCommand against the third database. That means that inside the reader.Read() loop you can get the one row you need by filling a DataTable with a SqlDataAdapter, and then issue an ExecuteNonQuery against a SqlCommand for the UPDATE statement.
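Roughly like this inside the reader loop (a sketch only; cnB and cnC stand for connections to databases B and C that you would open once before the loop, and SomeTableB, SomeTableC, and SomeColumn are placeholder names):

// Inside while (reader.Read()), for the current id:
var match = new DataTable();
using (var selectCmd = new SqlCommand("SELECT * FROM SomeTableB WHERE ID = @id", cnB))
using (var adapter = new SqlDataAdapter(selectCmd))
{
    selectCmd.Parameters.AddWithValue("@id", id);
    adapter.Fill(match);   // loads the single matching row from database B
}

if (match.Rows.Count == 1)
{
    using (var updateCmd = new SqlCommand("UPDATE SomeTableC SET SomeColumn = @value WHERE ID = @id", cnC))
    {
        updateCmd.Parameters.AddWithValue("@value", match.Rows[0]["SomeColumn"]);
        updateCmd.Parameters.AddWithValue("@id", id);
        updateCmd.ExecuteNonQuery();   // applies the update to database C
    }
}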
Another way of writing the reader code above, and it's a bit safer, is to use the using statement:
using (SqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        var id = reader.GetInt32(0);
        // and so on
    }
}
that will eliminate the need for:
reader.Close();
reader.Dispose();
and you could do the same for the SqlConnection if you wanted.
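For completeness, the fully wrapped version might look like this (still just a sketch of the same query):

using (var cn = new SqlConnection("some connection string"))
using (var cmd = new SqlCommand("SELECT ID FROM SomeTable", cn))
{
    cn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            var id = reader.GetInt32(0);
            // and so on
        }
    }
}   // connection, command, and reader are all closed and disposed here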
The SqlBulkCopy class might help.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx
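For example, if the rows you need to push into database C can be produced by a query (or any IDataReader), something along these lines is possible (a sketch only; the table names are placeholders, and keep in mind SqlBulkCopy inserts rows, it does not update existing ones):

using (var source = new SqlConnection("connection string for database B"))
using (var destination = new SqlConnection("connection string for database C"))
{
    source.Open();
    destination.Open();

    using (var cmd = new SqlCommand("SELECT * FROM SomeSourceTable", source))
    using (var reader = cmd.ExecuteReader())
    using (var bulkCopy = new SqlBulkCopy(destination))
    {
        bulkCopy.DestinationTableName = "SomeDestinationTable";
        // Streams rows from the reader straight into the destination table.
        bulkCopy.WriteToServer(reader);
    }
}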
Is it possible to save a DataTable into a SQL database in one cell (of type binary, for example) and read it back again into a DataTable?
I would create, if possible, an xml field inside the SQL database and save the DataTable as XML.
XML Support in Microsoft SQL Server 2005
and
C# and Vb.net example for XML data type tips in SQL Server 2005
should help you
Here is another example taken from here:
protected bool LoadXml(SqlConnection cn, XmlDocument doc)
{
    // Reading the xml from the database
    string sql = @"SELECT Id, XmlField FROM TABLE_WITH_XML_FIELD WHERE Id = @Id";
    SqlCommand cm = new SqlCommand(sql, cn);
    cm.Parameters.Add(new SqlParameter("@Id", 1));

    using (SqlDataReader dr = cm.ExecuteReader())
    {
        if (dr.Read())
        {
            SqlXml MyXml = dr.GetSqlXml(dr.GetOrdinal("XmlField"));
            doc.LoadXml(MyXml.Value);
            return true;
        }
        else
        {
            return false;
        }
    }
}
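Going the other way (saving), a sketch of the write side could look like the following; SaveXml is just a hypothetical counterpart to the method above, reusing the same TABLE_WITH_XML_FIELD example, and it relies on DataTable.WriteXml to serialize the table (schema included) so it can later be rebuilt with DataTable.ReadXml:

protected void SaveXml(SqlConnection cn, DataTable table)
{
    // Serialize the DataTable, including its schema, to an XML string.
    var sb = new StringBuilder();
    using (var writer = new StringWriter(sb))
    {
        table.WriteXml(writer, XmlWriteMode.WriteSchema);
    }

    // Store the XML in the xml column (hypothetical table and column names as above).
    string sql = @"UPDATE TABLE_WITH_XML_FIELD SET XmlField = @Xml WHERE Id = @Id";
    using (var cm = new SqlCommand(sql, cn))
    {
        cm.Parameters.Add("@Xml", SqlDbType.Xml).Value = sb.ToString();
        cm.Parameters.Add("@Id", SqlDbType.Int).Value = 1;
        cm.ExecuteNonQuery();
    }
}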
Why on earth would you want to?
If this is an operation you are going to do more than once, just save it out to a new SQL table and read from that table into your DataTable later.
It also breaks normalization rules: the value in the cell is not atomic.
I break that rule myself all the time, but still, it's important to understand alternative approaches.
You could have a related table to store their answers instead of storing all their answer values in a single cell.
Regarding your comment below, you can still do it with a related table. Just use a three-column table: tableID, fieldID, value. Each tableID can have its own set of fieldIDs. The trade-off is with the value data type: it needs to be a string, which means you don't get the advantages of date or numeric data type enforcement on your back end.
I have a DataSet populated from an Excel sheet. I wanted to use SqlBulkCopy to insert records into the Lead_Hdr table, where LeadId is the PK.
I am getting the following error while executing the code below:
The given ColumnMapping does not match up with any column in the
source or destination
string ConStr = ConfigurationManager.ConnectionStrings["ConStr"].ToString();
using (SqlBulkCopy s = new SqlBulkCopy(ConStr, SqlBulkCopyOptions.KeepIdentity))
{
    if (MySql.State == ConnectionState.Closed)
    {
        MySql.Open();
    }

    s.DestinationTableName = "PCRM_Lead_Hdr";
    s.NotifyAfter = 10000;

    #region Comment
    s.ColumnMappings.Clear();

    #region ColumnMapping
    s.ColumnMappings.Add("ClientID", "ClientID");
    s.ColumnMappings.Add("LeadID", "LeadID");
    s.ColumnMappings.Add("Company_Name", "Company_Name");
    s.ColumnMappings.Add("Website", "Website");
    s.ColumnMappings.Add("EmployeeCount", "EmployeeCount");
    s.ColumnMappings.Add("Revenue", "Revenue");
    s.ColumnMappings.Add("Address", "Address");
    s.ColumnMappings.Add("City", "City");
    s.ColumnMappings.Add("State", "State");
    s.ColumnMappings.Add("ZipCode", "ZipCode");
    s.ColumnMappings.Add("CountryId", "CountryId");
    s.ColumnMappings.Add("Phone", "Phone");
    s.ColumnMappings.Add("Fax", "Fax");
    s.ColumnMappings.Add("TimeZone", "TimeZone");
    s.ColumnMappings.Add("SicNo", "SicNo");
    s.ColumnMappings.Add("SicDesc", "SicDesc");
    s.ColumnMappings.Add("SourceID", "SourceID");
    s.ColumnMappings.Add("ResearchAnalysis", "ResearchAnalysis");
    s.ColumnMappings.Add("BasketID", "BasketID");
    s.ColumnMappings.Add("PipeLineStatusId", "PipeLineStatusId");
    s.ColumnMappings.Add("SurveyId", "SurveyId");
    s.ColumnMappings.Add("NextCallDate", "NextCallDate");
    s.ColumnMappings.Add("CurrentRecStatus", "CurrentRecStatus");
    s.ColumnMappings.Add("AssignedUserId", "AssignedUserId");
    s.ColumnMappings.Add("AssignedDate", "AssignedDate");
    s.ColumnMappings.Add("ToValueAmt", "ToValueAmt");
    s.ColumnMappings.Add("Remove", "Remove");
    s.ColumnMappings.Add("Release", "Release");
    s.ColumnMappings.Add("Insert_Date", "Insert_Date");
    s.ColumnMappings.Add("Insert_By", "Insert_By");
    s.ColumnMappings.Add("Updated_Date", "Updated_Date");
    s.ColumnMappings.Add("Updated_By", "Updated_By");
    #endregion
    #endregion

    s.WriteToServer(sourceTable);
    s.Close();
    MySql.Close();
}
I've encountered the same problem while copying data from Access to SQL Server 2005, and I found that the column mappings are case sensitive on both data sources, regardless of the databases' collation sensitivity.
Well, is it right? Do the column names exist on both sides?
To be honest, I've never bothered with mappings. I like to keep things simple - I tend to have a staging table that looks like the input on the server, then I SqlBulkCopy into the staging table, and finally run a stored procedure to move the data from the staging table into the actual table (see the sketch after the list below); advantages:
no issues with live data corruption if the import fails at any point
I can put a transaction just around the SPROC
I can have the bcp work without logging, safe in the knowledge that the SPROC will be logged
it is simple ;-p (no messing with mappings)
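A sketch of that pattern (the staging table and stored procedure names here are made up for illustration; ConStr and sourceTable are from the question):

using (var cn = new SqlConnection(ConStr))
{
    cn.Open();

    // 1) Bulk copy the raw input into a staging table shaped like the input.
    using (var bulk = new SqlBulkCopy(cn))
    {
        bulk.DestinationTableName = "Staging_Lead_Hdr";
        bulk.WriteToServer(sourceTable);
    }

    // 2) A stored procedure validates/transforms the staged rows and moves them into the
    //    real table inside its own transaction, so a failed import never touches live data.
    using (var cmd = new SqlCommand("usp_ImportLeadHdrFromStaging", cn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.ExecuteNonQuery();
    }
}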
As a final thought - if you are dealing with bulk data, you can get better throughput using IDataReader (since this is a streaming API, whereas DataTable is a buffered API). For example, I tend to hook CSV imports up using CsvReader as the source for a SqlBulkCopy. Alternatively, I have written shims around XmlReader to present each first-level element as a row in an IDataReader - very fast.
The answer by Marc would be my recommendation (on using a staging table). This ensures that if your source doesn't change, you'll have fewer issues importing in the future.
However, in my experience, you can check the following issues:
That the column names match in source and destination
That the column types match
If you think you have done this and still have no success, you can try the following:
1 - Allow nulls in all columns in your table
2 - Comment out all column mappings
3 - Rerun, adding one column at a time, until you find where your issue is
That should bring out the bug.
One of the reasons is that SqlBulkCopy is case sensitive. Follow these steps:
First, find your column in the source table by using the Contains method in C#.
Once your destination column is matched with a source column, get the index of that column and use its source column name in the SqlBulkCopy mapping.
For example:
// Get the columns from the source table
string sourceTableQuery = "Select top 1 * from sourceTable";
DataTable dtSource = SQLHelper.SqlHelper.ExecuteDataset(transaction, CommandType.Text, sourceTableQuery).Tables[0]; // I use a SQL helper for executing the query; you can use your own code

for (int i = 0; i < destinationTable.Columns.Count; i++)
{
    // Check if the destination column exists in the source table
    if (dtSource.Columns.Contains(destinationTable.Columns[i].ToString())) // the Contains method is not case sensitive
    {
        int sourceColumnIndex = dtSource.Columns.IndexOf(destinationTable.Columns[i].ToString()); // once the column is matched, get its index
        // Use the source table's column name rather than the destination table's, to avoid case-sensitivity problems
        bulkCopy.ColumnMappings.Add(dtSource.Columns[sourceColumnIndex].ToString(), dtSource.Columns[sourceColumnIndex].ToString());
    }
}
bulkCopy.WriteToServer(destinationTable);
bulkCopy.Close();
I would go with the staging idea; however, here is my approach to handling the case-sensitive nature. Happy to be critiqued on my LINQ.
using (SqlConnection connection = new SqlConnection(conn_str))
{
    connection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = string.Format("[{0}].[{1}].[{2}]", targetDatabase, targetSchema, targetTable);
        var targetColumsAvailable = GetSchema(conn_str, targetTable).ToArray();

        foreach (var column in dt.Columns)
        {
            if (targetColumsAvailable.Select(x => x.ToUpper()).Contains(column.ToString().ToUpper()))
            {
                var tc = targetColumsAvailable.Single(x => String.Equals(x, column.ToString(), StringComparison.CurrentCultureIgnoreCase));
                bulkCopy.ColumnMappings.Add(column.ToString(), tc);
            }
        }

        // Write from the source to the destination.
        bulkCopy.WriteToServer(dt);
        bulkCopy.Close();
    }
}
and the helper method
private static IEnumerable<string> GetSchema(string connectionString, string tableName)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = connection.CreateCommand())
    {
        command.CommandText = "sp_Columns";
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add("@table_name", SqlDbType.NVarChar, 384).Value = tableName;

        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                yield return (string)reader["column_name"];
            }
        }
    }
}
What I have found is that the columns in the table and the columns in the input must at least match. You can have more columns in the table and the input will still load. If you have fewer, you'll receive the error.
Thought a long time about answering...
Even if the column names match exactly (including case), you get the same error if the data types differ. So check both the column names and their data types.
P.S.: staging tables are definitely the way to import.