I have the code below trying to do a bulk copy from Oracle to SQL Server 2005, and it keeps timing out. How can I extend the Oracle connection timeout? From what I have read on the web, it seems I cannot.
OracleConnection source = new OracleConnection(GetOracleConnectionString());
source.Open();

SqlConnection dest = new SqlConnection(GetSQLConnectionString());
dest.Open();

OracleCommand sourceCommand = new OracleCommand(@"select * from table", source);

using (OracleDataReader dr = sourceCommand.ExecuteReader())
{
    using (SqlBulkCopy s = new SqlBulkCopy(dest))
    {
        s.DestinationTableName = "Defects";
        s.NotifyAfter = 100;
        s.SqlRowsCopied += new SqlRowsCopiedEventHandler(s_SqlRowsCopied);
        s.WriteToServer(dr);
        s.Close();
    }
}

source.Close();
dest.Close();
Here is my Oracle connection string:
return "User Id=USER;Password=pass;Data Source=(DESCRIPTION=" +
"(ADDRESS=(PROTOCOL=TCP)(HOST=14.12.7.2)(PORT=1139))" +
"(CONNECT_DATA=(SID=QCTRP1)));";
You can set the s.BulkCopyTimeout option.
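For example, applied to the code in the question (the 600-second value is only illustrative):

using (SqlBulkCopy s = new SqlBulkCopy(dest))
{
    s.DestinationTableName = "Defects";
    s.BulkCopyTimeout = 600;   // seconds to wait for WriteToServer to finish; 0 means no limit
    s.NotifyAfter = 100;
    s.SqlRowsCopied += new SqlRowsCopiedEventHandler(s_SqlRowsCopied);
    s.WriteToServer(dr);
}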
In your connection string, there are 'Connection Lifetime' and 'Connection Timeout' parameters that you can set accordingly. See here for the full reference.
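For instance, on the Oracle connection string from the question it could look something like this (placeholder values; as far as I know these keywords govern obtaining a connection, not how long a command may run):

return "User Id=USER;Password=pass;Connection Timeout=120;Connection Lifetime=300;" +
       "Data Source=(DESCRIPTION=" +
       "(ADDRESS=(PROTOCOL=TCP)(HOST=14.12.7.2)(PORT=1139))" +
       "(CONNECT_DATA=(SID=QCTRP1)));";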
BTW, I know you didn't ask this, but have you considered an ETL tool for migrating your DB records (e.g. Informatica, FME, etc.)? While your approach is valid, it isn't going to be very performant, since you are hydrating all of the records from one DB into the client and then serializing them to another DB. For small bulk sets this isn't a big issue, but if you were processing hundreds of thousands of rows, you might want to consider a dedicated ETL tool.
First of all: we have an application that is built heavily around the legacy DataTable type. Because of this, we cannot switch to e.g. EF now; that will be a future project. In the meantime we need to build a new server-side REST-based solution as a replacement for the legacy server logic.
Problem: SqlDataAdapter.Update(DataTable) does not update the data in the database:
New records: get inserted successfully in DB
Modified records: the above Update() method returns the correct count, but the change is not in the DB
Deleted records: the above Update() method returns a count of 0 and therefore throws a concurrency exception (which is by design of the data adapter, but not correct here)
Supposed cause: the DataTable is fetched by the server application on request of a client, but then transmitted to the client and back to the server before it gets written to the DB, so SqlDataAdapter does not seem to detect the changes properly:
Client requests data
Server fetches data from database
Data is transmitted serialized via REST to the client
Client works on data
Changed data is transmitted serialized via REST to server
Server instantiates a new SqlDataAdapter and calls SqlDataAdapter.Update() on this received data
Data integrity:
the correct RowState of each record is present on the server side when it calls SqlDataAdapter.Update()
the client transmits only changed records to the server, for efficiency reasons
all of the tables have a PK
none of the tables have FK relations (this is/was the legacy design rule)
Is it possible to somehow achieve a (server-side) SqlDataAdapter.Update() on "foreign" data, or is this method designed only for direct (client) updates of the original data to the database?
Common errors: of course I have already searched extensively for this issue and made sure the SQL command properties are populated correctly.
Server-side code:
public override int[] SaveDataTable(IEnumerable<DataTable> dataTables)
{
    var counts = new Queue<int>();
    using (_connection = new SqlConnection(ConnectionString))
    {
        _connection.Open();
        var transaction = _connection.BeginTransaction();
        try
        {
            foreach (var table in dataTables)
            {
                //var command = new SqlCommand();
                var command = _connection.CreateCommand();
                using (command)
                {
                    command.Connection = _connection;
                    command.Transaction = transaction;
                    command.CommandText = Global.GetSelectStatement(table);

                    var dataAdapter = new SqlDataAdapter(command);
                    var cmdBuilder = new SqlCommandBuilder(dataAdapter);
                    dataAdapter.UpdateCommand = cmdBuilder.GetUpdateCommand();
                    dataAdapter.InsertCommand = cmdBuilder.GetInsertCommand();
                    dataAdapter.DeleteCommand = cmdBuilder.GetDeleteCommand();

                    //dataAdapter.SelectCommand = command;
                    //var dSet = new DataSet();
                    //dataAdapter.Fill(dSet);
                    //dataAdapter.Fill(table);
                    //dataAdapter.Fill(new DataTable());
                    //var clone = table.Copy();
                    //clone.AcceptChanges();
                    //dataAdapter.Fill(clone);

                    counts.Enqueue(dataAdapter.Update(table));
                }
            }
            transaction.Commit();
        }
        catch (Exception)
        {
            transaction.Rollback(); //this may throw also
            throw;
        }
    }
    return counts.ToArray();
}
OK, so the question is solved. There was nothing wrong with the implementation of the SqlDataAdapter (except for the improvements suggested in the comments, of course).
The problem was in the client application code, which always called AcceptChanges() to reduce the amount of data. Prior to sending changed data to the data access layer, the RowState of each row was "restored" with DataRow.SetModified(), etc.
This is what broke SqlDataAdapter.Update().
Of course this is logical, as the original DataRowVersion is lost at that point. But it wasn't easy to identify.
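A minimal sketch of that effect, with hypothetical table and column names, in case someone hits the same trap:

var table = new DataTable("SomeTable");                  // hypothetical table
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Title", typeof(string));
table.PrimaryKey = new[] { table.Columns["Id"] };

var row = table.NewRow();
row["Id"] = 1;
row["Title"] = "original";
table.Rows.Add(row);
table.AcceptChanges();              // row is Unchanged; Original version = "original"

row["Title"] = "changed";           // Original = "original", Current = "changed"
                                    // -> exactly what SqlDataAdapter.Update() needs

// What the client code did to "reduce the amount of data":
table.AcceptChanges();              // Original version is overwritten with "changed"
row.SetModified();                  // RowState says Modified again, but the true
                                    // original values are gone, so Update() misbehaves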
I am trying to insert 200,000 documents from a folder into a varbinary column in a SQL Server database. I get a timeout expiration message after inserting 80,000 documents. The average file size is about 250 KB and the maximum file size is 50 MB. I am running this C# program on the server where the database is located.
Please suggest.
The error:
The timeout period elapsed prior to completion of the operation or the server is not responding.
The code:
string spath = @"c:\documents";
string[] files = Directory.GetFiles(spath, "*.*", SearchOption.AllDirectories);
Console.Write("Files Count:" + files.Length);

using (SqlConnection con = new SqlConnection(connectionString))
{
    con.Open();
    string insertSQL = "INSERT INTO table_Temp(doc_content, doc_path) values(@File, @path)";
    SqlCommand cmd = new SqlCommand(insertSQL, con);
    var pFile = cmd.Parameters.Add("@File", SqlDbType.VarBinary, -1);
    var pPath = cmd.Parameters.Add("@path", SqlDbType.Text);
    var tran = con.BeginTransaction();
    var fn = 0;
    foreach (string docPath in files)
    {
        string newPath = docPath.Remove(0, spath.Length);
        string archive = new DirectoryInfo(docPath).Parent.Name;
        fn += 1;
        using (var stream = new FileStream(docPath, FileMode.Open, FileAccess.Read))
        {
            pFile.Value = stream;
            pPath.Value = newPath;
            cmd.Transaction = tran;
            cmd.ExecuteNonQuery();
            if (fn % 10 == 0)
            {
                tran.Commit();
                tran = con.BeginTransaction();
                Console.Write("|");
            }
            Console.Write(".");
        }
    }
    tran.Commit();
}
For this, I would suggest using the SqlBulkCopy class, since it should be able to handle the data insertion much more easily. Further, as others have pointed out, you might want to increase the timeout for your command.
While I would agree this may be best done with a bulk copy of some sort, if you must do this in the C# program, your only option is probably to increase the timeout value. You can do this after your SqlCommand object has been created via cmd.CommandTimeout = <new timeout>;. The CommandTimeout property is an integer representing the number of seconds for the timeout, or zero if you never want it to time out.
See the MSDN docs for details
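For example, applied to the command from the question (the 600-second value is just an illustration):

SqlCommand cmd = new SqlCommand(insertSQL, con);
cmd.CommandTimeout = 600;   // seconds; 0 means wait indefinitely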
You should be able to set the timeout for the transaction directly on that object in your application code; this way you are not changing SQL Server settings.
Secondly, you can also build batches in your application. You say you can get 80k docs in before a timeout, so set your batch size at 50k, process them, commit them, and grab the next batch. Having your application manage batching also allows you to catch SQL errors, such as a timeout, and then dynamically adjust the batch size and retry without ever crashing. This is the entire reason for writing your application in the first place; otherwise you could just use the wizard in Management Studio and manually insert your files.
I highly recommend batching over other options.
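A rough sketch of that batching idea, reusing the files array and connectionString from the question (the batch size, the InsertDocument helper, and the retry policy are all hypothetical):

// requires: using System.Linq; using System.Data.SqlClient;
int batchSize = 50000;                       // illustrative starting point
int index = 0;

while (index < files.Length)
{
    var batch = files.Skip(index).Take(batchSize).ToArray();
    try
    {
        using (var con = new SqlConnection(connectionString))
        {
            con.Open();
            using (var tran = con.BeginTransaction())
            {
                foreach (string docPath in batch)
                {
                    // hypothetical helper wrapping the parameterized INSERT from the question
                    InsertDocument(con, tran, docPath);
                }
                tran.Commit();
            }
        }
        index += batch.Length;               // advance only after a successful commit
    }
    catch (SqlException)
    {
        // e.g. a timeout: shrink the batch and retry the same range
        batchSize = Math.Max(1000, batchSize / 2);
    }
}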
@Shubham Pandey also provides a great link to SQL bulk copy info, which in turn has links to more info. You should definitely experiment with the SqlBulkCopy class and see whether you can get additional gains with it as well.
I'm simply trying to read what is in the database and write it to the console, but I always get an exception on the conn.Open() line. Here is all the code:
SqlConnectionStringBuilder conn_string = new SqlConnectionStringBuilder();
conn_string.DataSource = "mysql14.000webhost.com"; // Server
conn_string.UserID = "a7709578_codecal";
conn_string.Password = "xxxxx";
conn_string.InitialCatalog = "a7709578_codecal"; // Database name

SqlConnection conn = new SqlConnection(conn_string.ToString());
conn.Open();

SqlCommand cmd = new SqlCommand("Select name FROM Users");
SqlDataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine("{1}, {0}", reader.GetString(0), reader.GetString(1));
}
reader.Close();
conn.Close();

if (Debugger.IsAttached)
{
    Console.ReadLine();
}
You need to build the connection string manually or use MySqlConnectionStringBuilder. MySQL uses a different connection string format than SQL Server and the SqlConnectionStringBuilder that you're using. You also need to use a MySQL library; SqlConnection, SqlCommand, etc. are all built specifically for SQL Server.
MySQL connectors
You are using the wrong provider for a MySQL database. The classes you used in the posted code are for SQL Server. Your code should look like the following, using the MySQL provider's classes:
MySqlConnectionStringBuilder conn_string = new MySqlConnectionStringBuilder();
conn_string.Server = "mysql14.000webhost.com";
conn_string.UserID = "a7709578_codecal";
conn_string.Password = "xxxxxxx";
conn_string.Database = "a7709578_codecal";
using (MySqlConnection conn = new MySqlConnection(conn_string.ToString()))
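To round that snippet off, a minimal sketch of the rest of the read loop with the MySQL classes (same query and single name column as in the question):

using (MySqlConnection conn = new MySqlConnection(conn_string.ToString()))
{
    conn.Open();
    using (MySqlCommand cmd = new MySqlCommand("SELECT name FROM Users", conn))
    using (MySqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetString(0));   // only one column is selected
        }
    }
}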
Also check the related post on SO.
Also, to point out: you are selecting only one column from your table, as can be seen here:
new SqlCommand("Select name FROM Users");
whereas you are trying to retrieve two column values, which is not correct:
Console.WriteLine("{1}, {0}", reader.GetString(0), reader.GetString(1))
000webhost free servers does not allow external connections to the server database.
You can only use your database from your PHP scripts stored on the server.
So my advice is to fetch the data with a PHP script on the server and call that script from C# like an API.
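For example, a minimal sketch of the C# side, assuming a hypothetical getdata.php script on the host that runs the query and echoes the result (the URL and response format are made up here):

// requires: using System.Net.Http;
using (var client = new HttpClient())
{
    // the PHP script executes the query server-side and returns the rows (e.g. as JSON)
    string response = client.GetStringAsync("http://yoursite.000webhostapp.com/getdata.php").Result;
    Console.WriteLine(response);
}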
In a web app, I am using SQL Server. However, when I try to store a bulk amount of data, it misses some of the records and does not insert them into the database. I want to know whether there is any commit statement or synchronization needed for the database. Data is being sent object by object using an AJAX call.
Here is my code:
try
{
    int surah = Convert.ToInt32(Request["surah"]);
    string verse = Request["data"];
    string connectionString = @"Data Source=(LocalDB)\v11.0;AttachDbFilename=C:\PROGRAM FILES (X86)\MICROSOFT SQL SERVER\MSSQL.1\MSSQL\DATA\PEACE_QURAN.MDF;Integrated Security=True";
    System.Data.SqlClient.SqlConnection connection = new SqlConnection(connectionString);
    string query = "insert into Ayyat_Translation_Language_old_20131209 values(null,null," + surah + ",'" + verse + "')";
    SqlCommand cmd = new SqlCommand(query, connection);
    connection.Open();
    cmd.ExecuteNonQuery();
    connection.Close();
}
catch (Exception e)
{
    System.IO.StreamWriter file = new System.IO.StreamWriter(@"E:\Office_Work\Peace_Quran\Peace_Quran\Files\ExceptionFile.txt", true);
    file.WriteLine("exception details : " + e.ToString());
    file.Close();
}
As you understand, the records cannot get lost along the way. Either the INSERT statement executes, or you get an exception. Since neither is happening, I believe you are losing something in the request-generating mechanism.
I would strongly suggest putting some logging on each request. You will probably find out that you receive fewer requests than you thought. This could be for a number of reasons, but since I don't know the exact mechanism calling the server-side code, I cannot offer an opinion.
Hope I helped!
I want to know whether multiple active result sets (MARS) exist for Microsoft's Access database. I am aware this exists for SQL Server. I tried using it with Access, but it didn't work for me. I want to know how to use MARS with Access.
In short, Microsoft Access does not support multiple active result sets (MARS). It is not supported by the ODBC provider, and the reason should be obvious if you think about it in terms of what MARS actually offers you from a performance standpoint.
The most important reason for MARS to exist is stored procedures executed on a SQL Server that produce multiple result sets. If you have such queries, you need to be able to somehow access those multiple result sets.
But in Access there is no such thing as stored procedures. If you have multiple queries you can just execute each one of them separately and get the result set for each. Hence, no need for MARS.
NOTE
In light of the comments, here's an example of how to have two data readers open at the same time:
using (var connection1 = new OdbcConnection("your connection string here"))
{
    connection1.Open();
    using (var connection2 = new OdbcConnection("your connection string here"))
    {
        connection2.Open();
        using (var cmd1 = connection1.CreateCommand())
        {
            cmd1.CommandText = "YOUR FIRST QUERY HERE";
            using (var dataReader1 = cmd1.ExecuteReader())
            {
                while (dataReader1.Read())
                {
                    // keep reading data from dataReader1 / connection 1
                    // .. at some point you may need to execute a second query
                    using (var cmd2 = connection2.CreateCommand())
                    {
                        cmd2.CommandText = "YOUR SECOND QUERY HERE";
                        // you can now execute the second query here
                        using (var dataReader2 = cmd2.ExecuteReader())
                        {
                            while (dataReader2.Read())
                            {
                            }
                        }
                    }
                }
            }
        }
        connection2.Close();
    }
    connection1.Close();
}