I have a C# .NET program running an ETL which connects to a DB2 database. Sometimes this database is down, so I'd like to do a health check at the beginning of the application to see if the database is available, without actually calling any stored procedures or pushing any data. Here's an example of the code I'm using now:
OdbcConnection myODBCConnection = new OdbcConnection("DSN=DB2AA;UID=ABCD;PWD=1234;");
OdbcCommand myODBCCommand = new OdbcCommand();
myODBCCommand.CommandType = CommandType.StoredProcedure;
myODBCCommand.CommandText = "{CALL SYSPROC.ABC001(?, ?)}";
myODBCCommand.Parameters.Add("INPUT", OdbcType.VarChar, 500);
myODBCCommand.Parameters["INPUT"].Value = myString;
myODBCCommand.Connection = myODBCConnection;
myODBCConnection.Open();
OdbcTransaction myTrans;
myTrans = myODBCConnection.BeginTransaction();
myODBCCommand.Transaction = myTrans;
myODBCCommand.ExecuteNonQuery();
myTrans.Commit();
myODBCConnection.Close();
What's the best way to test this connection without actually pushing any data?
You can simply run an innocuous SELECT query to check whether the database is available.
You can try to do something as simple as:
Select 1
Or
Select getdate()
Those simple queries don't touch any tables, but they will only return if the RDBMS is running.
Note: those examples are for SQL Server but the same idea should work for DB2 (e.g. SELECT 1 FROM SYSIBM.SYSDUMMY1). I haven't had to do a live check on DB2 yet, but a similar approach should be doable.
Note 2: after a closer look at your code, all you really need to do is check whether your ODBC connection's .Open() call succeeds.
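For example, a minimal health-check helper could look like the sketch below. It reuses the DSN-style connection string from the question; the method name IsDatabaseAvailable and the optional SYSIBM.SYSDUMMY1 probe are just illustrative.
// requires: using System.Data.Odbc;
static bool IsDatabaseAvailable(string connectionString)
{
    try
    {
        using (OdbcConnection conn = new OdbcConnection(connectionString))
        {
            // Open() throws if the DSN is unreachable or the credentials are rejected.
            conn.Open();

            // Optional: run a trivial statement that touches no user tables.
            using (OdbcCommand cmd = new OdbcCommand("SELECT 1 FROM SYSIBM.SYSDUMMY1", conn))
            {
                cmd.ExecuteScalar();
            }
        }
        return true;
    }
    catch (OdbcException)
    {
        return false;
    }
}
// Usage at the start of the ETL:
// if (!IsDatabaseAvailable("DSN=DB2AA;UID=ABCD;PWD=1234;")) { /* skip the run or alert */ }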
Assume we have a stored procedure like so
CREATE PROCEDURE CopyValue(IN src INT, OUT dest INT)
BEGIN
SET dest = src;
END
I want to call this from a .net app (assume connection etc created successfully)
var sql = "call CopyValue(100, @destValue); select @destValue as Results;";
The string in the above statement works perfectly well when called in MySql Workbench.
However this - obviously - fails with "MySqlException: Parameter '@destValue' must be defined" when executed on a MySqlCommand object in .NET.
How do I arrange this statement so I can capture an output parameter from an existing procedure?
NB: I'm running against MySql 5.6, which I can't upgrade at this time.
NB Calling the procedure directly with CommandType.StoredProcedure goes against company guidelines.
By default, user-defined variables aren't allowed in SQL statements by MySQL Connector/NET. You can relax this restriction by adding AllowUserVariables=true; to your connection string. No modifications to your SQL or how you're executing the MySqlCommand should be necessary.
For information about why this is the default, you can read the research on this MySqlConnector issue (which also has the same default behaviour, but a much better error message that will tell you how to solve the problem): https://github.com/mysql-net/MySqlConnector/issues/194
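As a rough sketch of how that plays out (assuming a local server and the CopyValue procedure from the question; the connection-string values are placeholders), the original SQL can then run unchanged:
using MySql.Data.MySqlClient;

// AllowUserVariables=true lets @destValue pass through to the server as a user variable.
using var con = new MySqlConnection(
    "Server=localhost;Database=test;Uid=...;Pwd=...;AllowUserVariables=true");
con.Open();

using var cmd = con.CreateCommand();
cmd.CommandText = "call CopyValue(100, @destValue); select @destValue as Results;";

using var reader = cmd.ExecuteReader();
if (reader.Read())
    Console.WriteLine($"Copied Value {reader.GetInt64(0)}");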
A colleague (who wishes to remain anonymous) has answered this perfectly. Essentially, put backticks ` after the @ and at the end of the variable name, e.g.
@`MyParam`
A fully working example.
static void Main(string[] args)
{
using var con = new MySql.Data.MySqlClient.MySqlConnection("Data Source=localhost; User Id=...;Password=...;Initial Catalog=...");
con.Open();
using var cmd = con.CreateCommand();
cmd.CommandText = "call CopyValue2(100, @`v2`); select @`v2` as Results;";
using var reader = cmd.ExecuteReader();
if (reader.Read())
Console.WriteLine($"Copied Value {reader.GetInt64(0)}");
}
Thanks OG :)
I am trying to set up my .NET 4.7.1 program, which connects to a MySQL 8.0 database, to run with the minimum privileges.
The .NET program uses MySql.Data to make the connection. The minimum right a user needs to execute a stored procedure is typically just the EXECUTE privilege. This works fine from MySQL Workbench or the command line.
Running the .NET program, however, returns the following exception:
System.Data.SqlTypes.SqlNullValueException: 'Data is Null. This method or property cannot be called on Null values.'
To make it easy, I have created a very small demo program that demonstrates the issue.
Setup of the database:
CREATE DATABASE Spike;
CREATE PROCEDURE TestAccess()
BEGIN
END;
CREATE USER Spike@localhost IDENTIFIED WITH mysql_native_password BY 'sample';
GRANT EXECUTE ON PROCEDURE `TestAccess` TO Spike@localhost;
Setup program code:
static void Main(string[] args)
{
using (MySqlConnection conn = new MySqlConnection("Server=localhost;Database=Spike;uid=Spike;pwd=sample"))
{
conn.Open();
Console.WriteLine("Connection open");
MySqlCommand cmd = new MySqlCommand();
cmd.Connection = conn;
cmd.CommandText = "TestAccess";
cmd.CommandType = System.Data.CommandType.StoredProcedure;
cmd.ExecuteNonQuery();
Console.WriteLine("Query executed");
}
Console.ReadKey();
}
The crash happens at the line cmd.ExecuteNonQuery();
The stack from the crash is interesting, since it seems to indicate that the information_schema is queried. When logging all statements I can see that the last statement before the exception is:
SELECT * FROM information_schema.routines WHERE 1=1 AND routine_schema LIKE 'Spike' AND routine_name LIKE 'TestAccess'
I cannot grant different rights on information_schema, but I could grant more rights on the stored procedure to make more information visible in the routines table; this feels wrong, however. Simple tests granting CREATE and ALTER access also did not work.
Is there something else I can do, without granting too many privileges?
This appears to be a bug in Connector/NET, similar to bug 75301 but a little different. When it's trying to determine parameter metadata for the procedure, it first creates a MySqlSchemaCollection named Procedures with all metadata about the procedure. (This is the SELECT * FROM information_schema.routines WHERE 1=1 AND routine_schema LIKE 'Spike' AND routine_name LIKE 'TestAccess' query you see in your log.)
The Spike user account doesn't have permission to read the ROUTINE_DEFINITION column, so it is NULL. Connector/NET expects this field to be non-NULL and throws a SqlNullValueException exception trying to read it.
There are two workarounds:
1) The first, which you've discovered, is to set CheckParameters=False in your connection string. This will disable retrieval of stored procedure metadata (avoiding the crash), but may lead to harder-to-debug problems calling other stored procedures if you don't get the order and type of parameters exactly right. (Connector/NET can no longer map them for you using the metadata.)
2) Switch to a different ADO.NET MySQL library that doesn't have this bug: MySqlConnector on NuGet. It's highly compatible with Connector/NET, performs faster, and fixes a lot of known issues.
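As a rough sketch of option 2, the demo program from the question needs little more than a package swap. This assumes the MySqlConnector NuGet package, which keeps the same MySqlConnection/MySqlCommand class names (the namespace is MySqlConnector from version 1.0; older releases used MySql.Data.MySqlClient):
using System;
using MySqlConnector;

static void Main(string[] args)
{
    using (var conn = new MySqlConnection("Server=localhost;Database=Spike;uid=Spike;pwd=sample"))
    {
        conn.Open();
        Console.WriteLine("Connection open");
        var cmd = new MySqlCommand("TestAccess", conn);
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.ExecuteNonQuery(); // per the answer above, this library avoids the NULL-metadata crash
        Console.WriteLine("Query executed");
    }
}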
I found an answer with which I am quite pleased. It is changing the connection string by adding CheckParameters=false:
using (MySqlConnection conn = new MySqlConnection("Server=localhost;Database=Spike;uid=Spike;pwd=sample;CheckParameters=false"))
This disables parameter checking, and thereby information_schema queries.
I'm working on a pretty special, legacy project where I need to build an app for PDA devices under Windows Mobile 6.5. The devices have a local database (SQL Server CE) which we are supposed to sync with a remote database (Microsoft Access) whenever they are docked and have network access.
So the local database using SQL Server CE works fine, but I can’t figure out a way to sync it to the Access database properly.
I read that ODBC and OLE DB are unsupported under Windows Mobile 6.5, most resources I find are obsolete or contain dead links, and the only way I found was to export the relevant tables of the local database to XML, in the hope of building a VBA component for Access to import them properly (and figuring out the backwards sync).
Update on the project and new questions
First of all, thanks to everyone who provided a useful answer, and to @josef who saved me a lot of time with the auto path on this thread.
So a remote SQL Server is a no-go for security reasons (the client is paranoid about security and won't provide me a server). I'm tied to SQL Server CE on the PDA and Access on the computer.
As for the sync:
The export works fine: I'm using multiple data adapters and the WriteXml method to generate XML files that are transmitted by FTP when the device is plugged back in. Those files are then automatically imported into the Access database (see code at the end).
My problem is with the import: I can acquire data through XML readers from an Access-generated file. This data is then inserted into a DataSet (in fact, I can even print the data on the PDA screen), but I can't figure out a way to do an "UPSERT" on the PDA's database. So I need a creative way to update/insert the data into tables that already contain rows with the same id.
I tried two methods, both failing with SQL errors (from what I understand, SQL Server CE doesn't handle stored procedures or full T-SQL batches). Here is an example with a simple query that is supposed to update the "available" flag of some storage spots:
try
{
SqlCeDataAdapter dataAdapter = new SqlCeDataAdapter();
DataSet xmlDataSet = new DataSet();
xmlDataSet.ReadXml(localPath + @"\import.xml");
dataGrid1.DataSource = xmlDataSet.Tables[1];
_conn.Open();
int i = 0;
for (i = 0; i <= xmlDataSet.Tables[1].Rows.Count - 1; i++)
{
spot = xmlDataSet.Tables[1].Rows[i].ItemArray[0].ToString();
is_available = Convert.ToBoolean(xmlDataSet.Tables[1].Rows[i].ItemArray[1]);
SqlCeCommand importSpotCmd = new SqlCeCommand(@"
IF EXISTS (SELECT spot FROM spots WHERE spot=@spot)
BEGIN
UPDATE spots SET available=@available
END
ELSE
BEGIN
INSERT INTO spots(spot, available)
VALUES(@spot, @available)
END", _conn);
importSpotCmd.Parameters.Add("@spot", spot);
importSpotCmd.Parameters.Add("@available", is_available);
dataAdapter.InsertCommand = importSpotCmd;
dataAdapter.InsertCommand.ExecuteNonQuery();
}
_conn.Close();
}
catch (SqlCeException sql_ex)
{
MessageBox.Show("SQL database error: " + sql_ex.Message);
}
I also tried the following query, with the same problem: SQL Server CE apparently doesn't handle ON DUPLICATE KEY (I think it's MySQL-specific).
INSERT INTO spots (spot, available)
VALUES(@spot, @available)
ON DUPLICATE KEY UPDATE spots SET available=@available
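For reference, a common workaround on SQL Server CE, which supports neither multi-statement IF EXISTS batches nor ON DUPLICATE KEY, is to try the UPDATE first and fall back to an INSERT when no row was affected. A minimal sketch, assuming the same spots table, the open connection _conn, and the spot/is_available values from the loop above:
// Try to update an existing row first.
using (SqlCeCommand updateCmd = new SqlCeCommand(
    "UPDATE spots SET available = @available WHERE spot = @spot", _conn))
{
    updateCmd.Parameters.AddWithValue("@spot", spot);
    updateCmd.Parameters.AddWithValue("@available", is_available);

    // ExecuteNonQuery returns the number of rows affected.
    if (updateCmd.ExecuteNonQuery() == 0)
    {
        // No existing row with this id: insert instead.
        using (SqlCeCommand insertCmd = new SqlCeCommand(
            "INSERT INTO spots (spot, available) VALUES (@spot, @available)", _conn))
        {
            insertCmd.Parameters.AddWithValue("@spot", spot);
            insertCmd.Parameters.AddWithValue("@available", is_available);
            insertCmd.ExecuteNonQuery();
        }
    }
}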
The code of the export method, now fixed and working fine, but still relevant for anybody who wants to know:
private void exportBtn_Click(object sender, EventArgs e)
{
const string sqlQuery = "SELECT * FROM storage";
const string sqlQuery2 = "SELECT * FROM spots";
string autoPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase); //get the current execution directory
using (SqlCeConnection _conn = new SqlCeConnection(_connString))
{
try
{
SqlCeDataAdapter dataAdapter1 = new SqlCeDataAdapter(sqlQuery, _conn);
SqlCeDataAdapter dataAdapter2 = new SqlCeDataAdapter(sqlQuery2, _conn);
_conn.Open();
DataSet ds = new DataSet("SQLExport");
dataAdapter1.Fill(ds, "stock");
dataAdapter2.Fill(ds, "spots");
ds.WriteXml(autoPath + @"\export.xml");
}
catch (SqlCeException sql_ex)
{
MessageBox.Show("SQL database error: " + sql_ex.Message);
}
}
}
As Access is more or less a stand-alone DB solution, I strongly recommend going with a full-flavored SQL Server plus IIS to set up Merge Replication synchronisation between the SQL CE data and the SQL Server data.
This is described with full sample code and setup in the book "Programming the .Net Compact Framework" by Paul Yao and David Durant (chapter 8, Synchronizing Mobile Data).
For a working sync, all changes to the defined tables and data on the server and the CE device must be tracked (done via GUIDs, unique numbers) together with their timestamps, and conflict handling has to be defined.
If the data is never changed by other means on the server, you may simply track device-side changes only and then push them to the Access database. This could be done by another app that does bulk updates as described here.
If you do not want to go down the expensive SQL Server route, there are cheaper solutions using the free SQLite (also available for CE and the Compact Framework) and a commercial sync tool for SQLite to MS Access such as DBSync.
If you are experienced, you may create your own SQLite to MS Access sync tool.
I am building a query using an ODBC command object in .NET with multiple parameters being passed in. When executing the query against SQL Anywhere, I get the following error. (The same code works against SQL Server.)
[System.Data.Odbc.OdbcException] = {"ERROR [07002] [Sybase][ODBC Driver][SQL Anywhere]Not enough values for host variables"}
The command object has the same number of parameters added as there are placeholders ('?') in the query. The following is a simple query and the C# code that fails the test.
C# code to populate the host variables
String queryText = @"DECLARE @loanuseraddress varchar(40), @loanid decimal
Set @loanid = ?
Set @loanuseraddress = ?
select * from loan_assignments where loan_id = @loanid";
OdbcConnection connection = new OdbcConnection(request.ConnectionString);
OdbcCommand command;
command = new OdbcCommand(queryText, connection);
OdbcParameter param1 = new OdbcParameter("@loanid", OdbcType.Decimal);
param1.Value = request.Loan.LoanNumber;
command.Parameters.Add(param1);
OdbcParameter param2 = new OdbcParameter("@loanuseremployer", OdbcType.VarChar);
param2.Value = appraisalCompanyUpdate.LoanUserEmployer;
if (param2.Value == null)
param2.Value = DBNull.Value;
command.Parameters.Add(param2);
connection.Open();
OdbcDataReader rows = command.ExecuteReader();
I fixed this by checking for nulls. When you try to pass a null parameter to Sybase, that's the error you get (at least for me). I have a feeling LoanId is null at some point.
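A minimal sketch of that null guard, using the same parameter names as the question (and assuming LoanNumber and LoanUserEmployer can actually be null, e.g. nullable types):
// Substitute DBNull.Value for any CLR null before adding the parameter;
// otherwise the SQL Anywhere/Sybase driver complains about missing host variables.
OdbcParameter param1 = new OdbcParameter("@loanid", OdbcType.Decimal);
param1.Value = (object)request.Loan.LoanNumber ?? DBNull.Value;
command.Parameters.Add(param1);

OdbcParameter param2 = new OdbcParameter("@loanuseremployer", OdbcType.VarChar);
param2.Value = (object)appraisalCompanyUpdate.LoanUserEmployer ?? DBNull.Value;
command.Parameters.Add(param2);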
Edit: After doing a little more research, I think you can also get this error when you try multiple inserts/deletes/updates through the Sybase ODBC connection in .NET. I don't think this is supported, and MSDN seems to say it's vendor-specific.
"Insufficient host variables" can also mean something else but it's applicable to the OP:
one of the other causes could be that you have a set of declared variables different from the set your SQL statement is using.
E.g. this could be a typo, or you could have copied in SQL from Visual Studio that was used to fill a dataset table using parameters (like :parm) but in doing so you forgot to declare it (as #parm) in your stored proc or begin/end block.
I want to write code that transfers data from one server to my SQL Server. Before I put the data in, I want to delete the current data, then copy the data from one server to the other. How do I do that? These are snippets from the code I have so far.
string SQL = ConfigurationManager.ConnectionStrings["SQLServer"].ToString();
string OLD = ConfigurationManager.ConnectionStrings["Server"].ToString();
SqlConnection SQLconn = new SqlConnection(SQL);
string SQLstatement = "DELETE FROM Data";
SqlCommand SQLcomm = new SqlCommand(SQLstatement, SQLconn);
SQLconn.Open();
OdbcConnection conn = new OdbcConnection(OLD);
string statement = "SELECT * FROM BILL.TRANSACTIONS ";
statement += "WHERE (TRANSACTION='NEW') ";
OdbcCommand comm = new OdbcCommand(statement, conn);
comm.CommandTimeout = 0;
conn.Open();
SQLcomm.ExecuteNonQuery(); // clear the destination table first
OdbcDataReader myDataReader = comm.ExecuteReader(); // then read the source rows
while (myDataReader.Read())
{
//...
}
SQLconn.Close();
SQLconn.Dispose();
Depending on which version of SQL Server you are using, the standard solution here is to use either DTS (2000 and before) or SSIS (2005 and on). You can turn it into an executable if you need to, schedule it straight from SQL Server, or run it manually. Both tools are fairly robust (although SSIS much more so) with methods to clear existing data, rollback in case of errors, transform data if necessary, write out exceptions, etc.
If at all possible I'd try and do it all in SQL Server. You can create a linked server to your other database server. Then simply use T-SQL to copy the data across - it would look something like...
INSERT INTO new_server_table (field1, field2)
SELECT x, y
FROM mylinkedserver.myolddatabase.myoldtable
If you need to do this on a regular basis or clear out the data first you can do this as part of a scheduled task using the SQL Agent.
If you only need to import the data once, and you have a lot of data, why not use the "BULK INSERT" command? Link
T-SQL allows you to insert data from a SELECT query. It would look something like this:
insert into Foo
select * from Bar;
As long as the field types align this will work - otherwise you will have to massage the data from Bar to fit the fields from Foo.
If you only need to do this once, take a look at the Database Publishing Wizard (just google it) and generate a script that does everything.