I'm having some trouble with an application that we're porting to Mono.
(For reference, the .NET runtime is 4.0, and the mono version is 2.6.7.)
EDIT: This problem persists on Mono 2.10.2
As part of the startup of the application, it reads data into memory. For this part I'm just using inline SQL commands, but for some reason I'm seeing inconsistent behaviour on Linux/Mono (whereas the whole thing works on Windows/.NET).
I have a query that works fine in some scenarios, but not in others.
This particular example doesn't work:
var cmd = new SqlCommand("SELECT ID, Name, VATTerritoryID, NativeCurrencyID FROM PricingZones", conn);
var reader = cmd.ExecuteReader();
var objectToLoad = new SomeObjectType();
while (reader.Read())
{
    objectToLoad.Property1 = reader.GetInt32(reader.GetOrdinal("ID"));
    objectToLoad.Property2 = reader.GetString(reader.GetOrdinal("Name"));
    objectToLoad.Property3 = reader.GetInt32(reader.GetOrdinal("VATTerritoryID"));
    objectToLoad.Property4 = reader.GetInt32(reader.GetOrdinal("NativeCurrencyID"));
}
EDIT: For comparison, here's one that does work:
var cmd = new SqlCommand("SELECT VATTerritoryID, ProductDescriptionID, VATBandID FROM VATTerritoryBandExceptions", conn);
var reader = cmd.ExecuteReader();
var someOtherObjectToLoad = new SomeOtherObjectType();
while (reader.Read())
{
    someOtherObjectToLoad.Property1 = reader.GetInt32(reader.GetOrdinal("VATTerritoryID"));
    someOtherObjectToLoad.Property2 = reader.GetString(reader.GetOrdinal("ProductDescriptionID"));
    someOtherObjectToLoad.Property3 = reader.GetInt32(reader.GetOrdinal("VATBandID"));
}
I suspected the differences might come down to:
Casing (since I know this differs between Windows and Linux), but putting everything in lowercase hasn't solved the problem
Column names (perhaps Mono cares more about reserved words?), but it seems replacing Name with [Name] or 'Name' didn't make any difference
The error I had in the first case was:
[IndexOutOfRangeException: Array index is out of range.]
at System.Data.SqlClient.SqlDataReader.GetInt32(Int32 i)
Suggesting that the requested column ordinal doesn't exist in the returned result set.
(EDIT: updated this section a little for clarity)
Oddly, if I do this:
var cmd = new SqlCommand("SELECT ID, Name, VATTerritoryID, NativeCurrencyID FROM PricingZones", conn);
var reader = cmd.ExecuteReader();
var objectToLoad = new SomeObjectType();
while (reader.Read())
{
Console.WriteLine("First row, first column is " + row.GetValue(0));
Console.WriteLine("First row, second column is " + row.GetValue(1));
Console.WriteLine("First row, third column is " + row.GetValue(2));
Console.WriteLine("First row, fourth column is " + row.GetValue(3));
}
The output is:
First row, first column is 0
First row, second column is New
Array index is out of range.
I assume something strange is happening with the Mono framework in this case, but I can't find a relevant bug report, and I can't identify why this only happens in some cases but not others! Has anyone else had a similar experience?
EDIT: I've changed some of the statements to match those in the failing query exactly, in case there is an issue with reserved words or similar. Note that the query I'm issuing in the second snippet really does request four columns, yet seemingly gets back only two very odd values (0 | New).
OK mate, I managed to reproduce your problem: you have a threading problem there. That was my only idea as to what might be causing it, and I managed to actually reproduce it.
In order to fix it you need to do the following:
Make sure every reader has a reader.Close() call after parsing the data.
Use the following code to do thread-safe calls:
Object executeLock = new Object();

private IDataReader ExecuteThreadSafe(IDbCommand sqlCommand)
{
    // Serialize ExecuteReader calls so two threads never hit the connection at once.
    lock (executeLock)
    {
        return sqlCommand.ExecuteReader(CommandBehavior.CloseConnection);
    }
}
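For example, a minimal usage sketch (the query comes from your failing case; ExecuteThreadSafe is the helper above):

var cmd = new SqlCommand("SELECT ID, Name, VATTerritoryID, NativeCurrencyID FROM PricingZones", conn);
using (IDataReader reader = ExecuteThreadSafe(cmd))
{
    while (reader.Read())
    {
        // ... read the row ...
    }
} // Dispose() closes the reader here, which satisfies point 1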
It looks like Mono has a different implementation of the SQL client objects than .NET. It's true that I wasn't able to reproduce it on Windows!
I would start out by trying to figure out exactly where the difference lies.
First off, you have the following code:
objectToLoad.Property1 = reader.GetInt32(reader.GetOrdinal("column1"));
objectToLoad.Property2 = reader.GetString(reader.GetOrdinal("column2"));
objectToLoad.Property3 = reader.GetString(reader.GetOrdinal("column ")); // Is the space on the end intended?
I believe you said the first two lines work, and the third line blows up. We first need to figure out where this blows up. I'd try:
int test = reader.GetOrdinal("column ");
Does test equal a valid column index? If not, that's your problem. Make sure that's the exact column name, try different casings, etc. If the index is valid, try:
object foo = reader[test];
Console.WriteLine(foo.GetType().Name);
to figure out what the data type of this column is. Perhaps there's a casting problem.
If that fails, I'd try loading the reader into a DataSet object instead so you can look at the exact schema more carefully.
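Something along these lines should show the schema Mono actually returns (a sketch; it reuses the conn and query from the question):

// Fill a DataTable so the returned schema can be inspected column by column.
var table = new DataTable();
using (var adapter = new SqlDataAdapter("SELECT ID, Name, VATTerritoryID, NativeCurrencyID FROM PricingZones", conn))
{
    adapter.Fill(table);
}
foreach (DataColumn col in table.Columns)
{
    Console.WriteLine(col.ColumnName + " : " + col.DataType.Name);
}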
If you do find a difference in behavior between Mono and the .NET Framework, the Mono team is usually very willing to fix these. I strongly recommend filing this as a bug.
Hope this helps!
Try accessing your reader in this manner:
reader["column1isastring"].ToString();
(Int32)reader["column2isInt32"];
Also, as a side note, make sure you use the "using" statement for disposable objects. I'm not sure whether Mono implements this fully, but it's worth a shot. What the using statement does is dispose of objects as soon as you're done with them. It's very handy and makes for clean code. Quick example:
using (MySqlCommand command = new MySqlCommand("SELECT column1, column2, column FROM tablename", conn))
{
    try
    {
        conn.Open();
        using (MySqlDataReader reader = command.ExecuteReader())
        {
            reader.Read();
            var someString = reader["column1isastring"].ToString();
            var whatever = (Int32)reader["column2isInt32"];
        } // reader closes and disposes here
    }
    catch (Exception ex)
    {
        // log this exception
    }
    finally
    {
        conn.Close();
    }
} // conn gets closed and disposed of here
Also, if you're getting user input that goes straight to your commands, make sure you use the MySqlParameter class to keep malicious parameters from dropping tables, for example.
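For example (a sketch reusing the column and table names from above; userSuppliedValue is a hypothetical input):

using (MySqlCommand command = new MySqlCommand("SELECT column1, column2 FROM tablename WHERE column2 = @id", conn))
{
    // The value travels separately from the SQL text, so it can't alter the statement.
    command.Parameters.AddWithValue("@id", userSuppliedValue);
    using (MySqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... read columns ...
        }
    }
}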
Mono is not .NET, and there are a lot of differences, especially in earlier versions. The core mechanism for connecting to SQL Server is a TDS (tabular data stream) implementation in C#, which in earlier versions of Mono (and thus of the TDS code) can cause a lot of problems. Almost all the essential SQL and data classes have differences that can potentially cause exceptions. It may be worth trying Mono 2.10+, since the Mono team improves the whole project continuously.
I am using Mono 4.0 and I have a similar problem. My research suggests that the most probable reason is a flawed implementation of the connection pool in Mono. At least, if I switch off pooling, the problem disappears.
To switch off connection pooling, add the following to your connection string: Pooling=false;
After that, creation of the connection object should look like this:
var conn = new SqlConnection("Server=localhost; Database=somedb; user=user1; password=qwerty; Pooling=false;");
I'm at the end of my rope on this issue. Trying to connect to a FoxPro directory hosted locally, using the Microsoft OLE DB Provider for Visual FoxPro 9.0 with the following code:
using (var con = new OleDbConnection(@"Data Source=C:\FoxDB;Provider=VFPOLEDB.1;"))
{
    con.Open();
    using (var cmd = new OleDbCommand("select * from order", con))
    {
        var dr = cmd.ExecuteReader();
        while (dr.Read())
        {
            Debug.WriteLine(dr["ord_id"]);
        }
    }
}
Executing the code does not throw an exception, but zero rows are returned. The schema is discovered and returned: the data reader shows 72 fields when I examine it (I've tried the same with data tables, data sets, data adapters, etc.; all return the schema, but zero results).
Building an SSIS package to access the same table and pull it into an MSSQL database results in 3,828 records being pulled in. order.dbf on disk is 884 KB, which seems to jibe with the SSIS results.
I've tried adding Collation Sequence to the connection string, both Machine and General to no effect.
Please tell me there's something obvious I'm missing!
UPDATE: So apparently there's something I just don't understand about FoxPro. If I change the command type to CommandType.TableDirect and set the command text to just order, it returns all the rows. Any insight would be appreciated.
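For reference, the TableDirect variant described in the update looks roughly like this (same connection as above):

using (var cmd = new OleDbCommand("order", con))
{
    cmd.CommandType = CommandType.TableDirect; // CommandText is interpreted as a table name
    using (var dr = cmd.ExecuteReader())
    {
        while (dr.Read())
        {
            Debug.WriteLine(dr["ord_id"]);
        }
    }
}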
I don't think the problem is with FoxPro: testing with the same exact code, I can get the result (I created a free test table in c:\FoxDb). Ensure that you are using the latest VFPOLEDB driver. It looks like the problem lies within your table.
BTW, order is a keyword, you better write it as:
"select * from [order]"
although it would work as you wrote it (VFP is forgiving in that regard). The problem might also lie in the collation sequence you used (I never use collation sequences other than machine; they are problematic in Turkish, and I expect the same in some other languages).
Issue
I am refactoring a project written by another developer: a C# .NET 4.5 project that contains a dataset.xsd. I was asked to increase the efficiency of the project. Basically, the problem is that the table adapter fills my dataset with the entire table from the database. This is several million rows of data.
Problem
I had a line that basically does this:
this.customersTableAdapter.Fill(this.derpDataset.Customers);
So I decided to do something like this (not wanting to change the .xsd):
//This references a class written in order to get the database manually instead of using the .xsd
SqlConnection sqlConnection = DB_Connection.OpenDatabase();
SqlCommand sqlCommand = new SqlCommand("SELECT * FROM COMDB WHERE ID = " + ID.ToString() + ";", sqlConnection);
this.customersTableAdapter.Adapter.SelectCommand = sqlCommand;
this.customersTableAdapter.Adapter.Fill(this.derpDataset.Customers);
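As an aside, concatenating ID into the command text invites SQL injection; a parameterized variant (a sketch, keeping the same hypothetical DB_Connection helper) would be:

SqlConnection sqlConnection = DB_Connection.OpenDatabase();
SqlCommand sqlCommand = new SqlCommand("SELECT * FROM COMDB WHERE ID = @id;", sqlConnection);
sqlCommand.Parameters.AddWithValue("@id", ID); // the value is bound, never concatenated
this.customersTableAdapter.Adapter.SelectCommand = sqlCommand;
this.customersTableAdapter.Adapter.Fill(this.derpDataset.Customers);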
Code
Basically, the .xsd has a bunch of auto-generated stuff, but I just needed a way to step around it and fill with a much more optimized query.
[global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
[global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Data.Design.TypedDataSetGenerator", "4.0.0.0")]
private void InitCommandCollection() {
    this._commandCollection = new global::System.Data.SqlClient.SqlCommand[1];
    this._commandCollection[0] = new global::System.Data.SqlClient.SqlCommand();
    this._commandCollection[0].Connection = this.Connection;
    this._commandCollection[0].CommandText = "SELECT customerID, name, SSN FROM dbo.tblActualCustomerValueFloat";
    this._commandCollection[0].CommandType = global::System.Data.CommandType.Text;
}
Here is the autogenerated Fill function.
public virtual int Fill(CustomerDataSet.CustomersDataTable dataTable) {
    this.Adapter.SelectCommand = this.CommandCollection[0];
    if ((this.ClearBeforeFill == true)) {
        dataTable.Clear();
    }
    int returnValue = this.Adapter.Fill(dataTable);
    return returnValue;
}
Error
This is frustrating because I get a new error which makes no sense to me. It says:
A first chance exception of type 'System.Data.ConstraintException' occurred in System.Data.dll
The exception snapshot states the following:
{"Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints."}
I will not use the commonly suggested fix of disabling constraint enforcement (EnforceConstraints = false), because it breaks the next line in my code, which is:
CustomerDataSet.Runs_CustomerRow runRow = derpDataset.Runs_Customer.First(t => t.ID == ID);
Attempts
Tried changing the .xsd, which caused the whole thing to crash. This convinced me not to mess with the .xsd at all. Decided to search here and found here.
Read the documentation MSDN provided here and here.
Googled, but found no similar issue. Maybe this, but it's unclear what his fix was.
Code provided above
I know what I want to do, and the idea is simple, but I have found the implementation extremely hard given the project's reliance on the .xsd.
I would really appreciate any assistance and comments on this issue.
I figured out the answer. First, I needed a more useful error message, so I wrote the function that MSDN suggests for pulling the errors out of a dataset, wrapped the Fill in a try/catch, and read the output on the console.
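The function is essentially a walk over the rows' attached errors; a sketch (the method name is mine):

// Print whatever constraint errors the Fill attached to the dataset's rows.
static void PrintDataSetErrors(DataSet ds)
{
    foreach (DataTable table in ds.Tables)
    {
        if (!table.HasErrors) continue;
        foreach (DataRow row in table.GetErrors())
        {
            Console.WriteLine(table.TableName + ": " + row.RowError);
        }
    }
}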
It turns out I was querying the wrong database and getting nulls. This was easy to do because the schema names were so similar, and the error message wasn't useful until I could see the actual values in my columns.
You can find the link to the error function I used here.
I'm currently evaluating Oracle's ODP.NET data provider, and I ran into a problem that popped up in one of our test cases: when the same command text is executed with different parameter types, the parameter type of the first executed command is used for all subsequent commands.
Take for example the following code:
const int sampleInt32 = 1234567890;
const string sampleNvarchar = "someTestString";
const string sqlCommandtext = "SELECT :PARAM PARAM FROM DUAL";
using (OracleConnection connection = new OracleConnection(builder.ConnectionString))
{
    connection.Open();

    // Test 1 - Int32
    using (OracleCommand commandInt32 = connection.CreateCommand())
    {
        commandInt32.CommandText = sqlCommandtext;
        commandInt32.Parameters.Add("PARAM", OracleDbType.Int32, sampleInt32, ParameterDirection.Input);
        using (IDataReader reader = commandInt32.ExecuteReader())
        {
            while (reader.Read())
            {
                int resultInt32 = (int)reader.GetDecimal(0);
                Assert.AreEqual(sampleInt32, resultInt32);
            }
        }
    }

    // Test 2 - NVarchar
    using (OracleCommand commandNVarchar = connection.CreateCommand())
    {
        commandNVarchar.CommandText = sqlCommandtext;
        commandNVarchar.Parameters.Add("PARAM", OracleDbType.NVarchar2, sampleNvarchar, ParameterDirection.Input);
        using (IDataReader reader = commandNVarchar.ExecuteReader())
        {
            while (reader.Read())
            {
                string resultNVarchar = reader.GetString(0);
                Assert.AreEqual(sampleNvarchar, resultNVarchar);
            }
        }
    }
}
If commandInt32 is executed before commandNVarchar, execution of commandNVarchar fails with ORA-01722 - Invalid number. If the order is switched so commandNVarchar is executed first, it fails with "Specified cast is not valid" on reader.GetDecimal.
So far I've tried setting StatementCacheSize=0; Pooling=false; StatementCachePurge=true as ConnectionString parameters but I can't get this to work.
Is there anything I'm missing or are there any other options worth trying?
EDIT: Maybe some background on why this is needed/required: we don't use ODP or any other data provider directly in our application (or at least, we're on our way to reaching that goal); there's a DataLayer in between that performs database/provider-specific optimizations, monitoring of connection health, and so on.
In this layer, for example, stored procedures can be called with the option of parameter type tuning. Some of our procedures have Clobs as parameter types, since a value can occasionally be longer than x characters, but most likely it will be shorter.
So before executing via ExecuteNonQuery with ArrayBindCount set to y, parameter values are checked to see whether the Clob can be passed as varchar (the Nclob as Nvarchar). "Rebinding" reduces the time to execute 2500 records from about 500 ms to 200 ms, at the cost of a few ms spent checking string lengths. And this rebinding can only be done if the parameter type can be changed; without that option we would have to execute as Clob every time and take the performance hit.
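The length check itself amounts to something like this (a sketch; the helper name and the 4000-character cutoff are assumptions, not our production code):

// Bind as VARCHAR2 when the value fits, CLOB otherwise (same idea for NVARCHAR2/NCLOB).
static OracleDbType ChooseTextBindType(string value)
{
    const int varcharLimit = 4000; // assumed cutoff; the real limit depends on DB charset/config
    return (value == null || value.Length <= varcharLimit)
        ? OracleDbType.Varchar2
        : OracleDbType.Clob;
}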
To my understanding, parameter binding is unsupported in a SELECT list. I was so surprised that this worked at all that I had to run your code to see it with my own eyes. I believe it's a bug that the client allows that SQL statement to execute at all.
Regardless, I inserted the following line between the test cases to get them both to work:
connection.PurgeStatementCache();
However, this only seems to work with the Managed Client (I've tried it with version 4.112.3.60). The regular client still fails as you describe.
Two things. When used as connection string parameters, the configuration variables need to have spaces, i.e.:
Statement Cache Size=0;
The format you are using can be used directly in the config, though:
http://docs.oracle.com/html/E10927_01/featConfig.htm#CJABCACG
http://docs.oracle.com/html/E10927_01/featOraCommand.htm#CIHCAFIG
You could use that same configuration section to enable tracing - comparing the traces might give you an idea of what is happening.
I believe PurgeStatementCache is a runtime method (I'm not sure StatementCachePurge exists), i.e.:
connection.PurgeStatementCache();
Metadata Pooling = false;
Our application is using Oracle 12c with ODP.Net Managed Provider
When using OracleCommandBuilder.DeriveParameters(), we were always seeing the same parameters returned from the stored procedure despite adding/removing/updating parameters. We would only see the changes after restarting the IIS process.
The only solution that worked was setting Metadata Pooling = false; in the Oracle connection string (sketched after the list below).
We had no success with the following which have been mentioned here or on Oracle's forums:
connection.PurgeStatementCache();
Statement Cache Size=0;
Pooling = false;
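For illustration, the working connection string looked something like this (placeholder credentials):

var conn = new OracleConnection(
    "User Id=someUser;Password=somePassword;Data Source=someDb;Metadata Pooling=false;");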
What version of Oracle are you connecting to? This may be a bind variable peeking (or lack thereof) issue. The feature was introduced in 9i but had some issues all the way through 10. You could try executing the following (in SQL*Plus) to see if you can reproduce the problem without ODP.net:
var param varchar2(255)
exec :param:='TEST';
select :param FROM DUAL;
Change the type of "param" from varchar2 to number, change the value, and re-execute to see what happens.
You could also try executing the command under a different connection instead of a shared one.
In the end, you could simply rename the bind variable in the statement according to the type (i.e. :paramNum or :paramString). The name you give the parameter on the .NET side is irrelevant unless cmd.BindByName is set to true; by default it is false, and variables are bound in the order they are added.
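If you do want name-based binding, it has to be requested explicitly, e.g. (a sketch based on the question's code):

commandNVarchar.BindByName = true; // default is false: parameters bind in the order added
commandNVarchar.Parameters.Add("PARAM", OracleDbType.NVarchar2, sampleNvarchar, ParameterDirection.Input);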
Working on a small project with VS2010 and SQLEXPRESS.
I have the following table with some data I entered:
(screenshot: table fields)
When I execute this:
using (SqlConnection conn = new SqlConnection(connString)) {
    conn.Open();
    SqlCommand cmd = new SqlCommand("SELECT * FROM ACCOUNTS", conn);
    using (SqlDataReader reader = cmd.ExecuteReader()) {
        while (reader.Read()) {
            accountList.Add(new Account((int)reader["id"], (float)reader["number"], (AccountType)reader["type"],
                (string)reader["name"], (float)reader["balance"], (float)reader["maxdebit"], (int)reader["userId"]));
        } //Not tested
    } // Dispose Reader
    //do something here
} // Dispose implicitly called
The debugger shows me the following:
(screenshot: debugger output)
It only shows the first 4 columns, for some reason. I've never seen this behavior before, and the code to get it seems fine. Anyone got any clues for me?
EDIT: Problem solved.
The problem turned out to be an InvalidCastException. The issue with only 4 items of the array showing up was not really a problem, as it turns out: the FieldCount variable indicated 7 fields, which is exactly what's in the table, so all of them were retrieved.
My table data looks like this:
(screenshot: Ik0Ap.png)
It clearly says "float", so I cast to float. But according to the VS debugger it's a double, so the cast was invalid (SQL Server's float is an 8-byte type that maps to System.Double in .NET). Sigh.
So that was the main problem. Why only 4 elements of the array were shown is still a mystery to me... maybe a VS2010 setting that keeps arrays with thousands of objects from being printed in the debugger?
Try specifying the columns explicitly, as in:
SELECT id, name, number, type, balance, ... etc.
I have a block of code intended to pull text descriptions from a database table and save them to a text file. It looks like this (C# .NET):
OdbcCommand getItemsCommand = new OdbcCommand("SELECT ID FROM ITEMS", databaseConnection);
OdbcDataReader getItemsReader = getItemsCommand.ExecuteReader();

OdbcCommand getDescriptionCommand = new OdbcCommand("SELECT ITEMDESCRIPTION FROM ITEMS WHERE ID = ?", databaseConnection);
getDescriptionCommand.Prepare();

while (getItemsReader.Read())
{
    long id = getItemsReader.GetInt64(0);
    String outputPath = "c:\\text\\" + id + ".txt";
    if (!File.Exists(outputPath))
    {
        getDescriptionCommand.Parameters.Clear();
        getDescriptionCommand.Parameters.AddWithValue("id", id);
        String description = (String)getDescriptionCommand.ExecuteScalar();
        StreamWriter outputWriter = new StreamWriter(outputPath);
        outputWriter.Write(description);
        outputWriter.Close();
    }
}
getItemsReader.Close();
This code has successfully saved a portion of the data to .txt files, but for many rows, an AccessViolationException is thrown on the following line:
String description = (String)getDescriptionCommand.ExecuteScalar();
The Exception text is "Attempted to read or write protected memory. This is often an indication that other memory is corrupt".
The program usually throws the exception on the same rows of the table, but it doesn't appear to be 100% consistent. Sometimes data that threw the exception in the past will suddenly work.
Some people are undoubtedly wondering why I didn't just SELECT ID, ITEMDESCRIPTION FROM ITEMS in the getItemsCommand and skip the second query. Actually, I did it that way initially, and I was encountering the same error with getItemsReader.GetString(). I was afraid that the result set was taking up too much memory and that this might be causing the error, so I decided to try this method to see if it would help. It didn't. Does anyone know why this might be happening?
By the way, ID is an INT and ITEMDESCRIPTION is a VARCHAR(32000) column. If it makes any difference, the database is Borland Interbase 6.0 (Ick!)
EDIT: I gave the wrong line when describing where the exception was being thrown!! ARGH!! Fixed now. Also, I've tried the things suggested so far, but they didn't help. However, I found that only very old records in the database were causing this error, which is strange. If I change the query to only pull records inserted in the last 5 years, there are no problems. Someone suggested to me this might be an encoding conversion problem or something like that?
Update: Solved it. The problem turned out to be a bug in the ODBC driver for our not-very-reliable database software. A workaround with other drivers fixed the problem.
It could be a bug in the ODBC driver you are using. What driver is it? What is your connection string?
A shot in the dark here...
Try executing your reader, saving your results (maybe in an array or list), and making sure the reader is closed before executing or preparing the next command. You may even want to go to the extreme of putting your getItemsCommand construction inside a using block so you know it has no resources open before you execute your next command. A sketch follows.
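Here's what that restructuring might look like (names follow the question's code):

// Phase 1: read all the IDs, then release the reader before any other command runs.
var ids = new List<long>();
using (OdbcCommand getItemsCommand = new OdbcCommand("SELECT ID FROM ITEMS", databaseConnection))
using (OdbcDataReader getItemsReader = getItemsCommand.ExecuteReader())
{
    while (getItemsReader.Read())
        ids.Add(getItemsReader.GetInt64(0));
} // reader and command are disposed here

// Phase 2: fetch each description with the prepared command, one at a time.
foreach (long id in ids)
{
    // ... ExecuteScalar per id, exactly as in the original loop ...
}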