I have some C# code that dynamically generates an SQL query and executes it via IDbCommand.ExecuteScalar(). This works fine; there's exactly one result that matches my query in the DB, and that result is always returned.
But just recently, as the first step in a refactoring to support multiple matches in the DB, I replaced the call to ExecuteScalar() with one to ExecuteReader(). Everything else in the setup and DB access is the same. But the returned IDataReader contains no data, and throws InvalidOperationExceptions whenever I try to get data out of it.
I know the data's still there; everything works fine when I switch back to ExecuteScalar(). How is this possible?
Make sure you call the Read() method on the IDataReader returned by ExecuteReader() before trying to access any data. Read() advances the reader to the first (and in your case only) row of the result set. If you access the reader's columns without calling Read() first, you get exactly the InvalidOperationException you are seeing.
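For example, a minimal sketch of that pattern (assuming an open IDbConnection named conn and your generated SQL in a string named sql, both hypothetical names):

using (IDbCommand cmd = conn.CreateCommand())
{
    cmd.CommandText = sql;
    using (IDataReader reader = cmd.ExecuteReader())
    {
        // Read() moves to the first row and returns false if there are no rows.
        if (reader.Read())
        {
            object value = reader[0]; // column access is only valid after Read()
        }
    }
}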
Is this to do with having multiple IDataReaders open on the same connection?
You won't get that issue with ExecuteScalar(), but once you start using ExecuteReader() you need to make sure all previous DataReaders on the same connection are closed (e.g. by wrapping each reader in a 'using' block).
What is the error message that you get with the InvalidOperationException?
I have a scenario where I am running multiple SQL queries over a set period of time. Throughout this scenario, there is a chance an Update statement will be running at the same time as the Select statement, affecting the same table. When this happens, the Select query runs but my SqlDataReader returns no rows. However, immediately retrying the Select query results in receiving the correct data. The Select statement should realistically never return no data.
Although I can retry the query after the failed read to get the results, I would like to avoid this collision in the first place, or at least have a way to differentiate between genuinely reading no rows and this collision occurring. When examining the SqlDataReader object, the only property that tells me a read failed is that HasRows is set to false, which is not specific enough for what I am looking for here. Additionally, attempting to read just returns false and does not throw an error that might say why the read failed.
So far I have tried putting locks on both the Select and Update queries, but I've had no luck. Ideally the Update statement would take a lock that makes the Select queries queue, rather than be blocked, until after the transaction completes. I have tried a few different lock variations; one example I tried was this:
update <Tablename> with (TABLOCKX, HOLDLOCK) set ...
This does not stop the collision as I expected, so this is where I am stuck.
Is the best method here just to retry when I know the read should never be empty, or is there a better approach along the lines of the SQL locks I'm suggesting?
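For reference, the retry fallback mentioned above looks roughly like this (a simplified sketch; cmd stands in for the real Select command, which is built elsewhere):

DataTable result = new DataTable();
for (int attempt = 0; attempt < 3; attempt++)
{
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        if (reader.HasRows)
        {
            result.Load(reader);   // got real data, stop retrying
            break;
        }
    }
    Thread.Sleep(100);             // brief back-off before trying again
}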
You could take a look at Mutex. You can use it to make sure that only one instance/query runs at a time.
MSDN - Mutex
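A rough sketch of the idea (hypothetical names; it assumes both the Select and the Update code paths can share the same named mutex, e.g. on the same machine):

private static readonly Mutex QueryMutex = new Mutex(false, @"Global\MyAppTableAccess");

void RunSelect(SqlConnection conn)
{
    QueryMutex.WaitOne();          // queue behind any in-flight Update
    try
    {
        using (var cmd = new SqlCommand("select * from MyTable", conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* consume rows */ }
        }
    }
    finally
    {
        QueryMutex.ReleaseMutex(); // let the next queued query proceed
    }
}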
I'm writing a customized, simple Web interface for Oracle DB, using ASP.NET, a Web API project in C#, and Oracle.DataAccess (ODP.NET). This is an educational project which I am designing as an extra project for a college course. There are several reasons for me designing this project, but the upshot is that the Oracle-provided tools (SQL Developer, Enterprise Manager Express, etc.) are not suitable for the task at hand.
I have an API call that can accept a query string, execute it against the DBMS and return the DBMS's output as JSON data, along with some additional return data. This has been sufficient for simple SELECT queries and other basic DDL/DML queries. However, now we're branching into PL/SQL.
For example, the most basic PL/SQL HELLO WORLD program that we'd execute looks like:
BEGIN
DBMS_OUTPUT.PUT_LINE('Hello World');
END;
When I feed this query into my C# API, it does execute successfully. However, I want to be able to retrieve the output of the DBMS_OUTPUT.PUT_LINE call(s).
This question has been addressed before, and I have looked into a few of the solutions and settled on one involving a piece of code that calls the following PL/SQL on the database:
BEGIN
Dbms_output.get_line(:line, :status);
END;
The C# code obviously creates and adds the correct parameter objects to the request before sending it. I plan to call this function repeatedly until a NULL value comes back, indicating the end of output. This data would then be added to the JSON object returned by the API so that the Web interface can display the output. However, this function never returns any lines of output.
My hunch (I'm still learning Oracle myself, so I'm not sure) is that either the server isn't actually outputting the data, or that the buffer is flushed after the anonymous PL/SQL block (the Hello World program) finishes.
It was also suggested to add set serveroutput on; to the PL/SQL query but this did not work: it produced the error ORA-00922: missing or invalid option.
Here is the actual C# code being used to retrieve a line of output from the DBMS_OUTPUT buffer:
private string GetDbmsOutputLine(OracleConnection conn)
{
OracleCommand command = new OracleCommand
{
CommandText = "begin dbms_output.get_line(:line, :status); end;",
CommandType = CommandType.Text,
Connection = conn,
};
OracleParameter lineParameter = new OracleParameter("line",
OracleDbType.Varchar2);
lineParameter.Size = 32000;
lineParameter.Direction = ParameterDirection.Output;
command.Parameters.Add(lineParameter);
OracleParameter statusParameter = new OracleParameter("status",
OracleDbType.Int32);
statusParameter.Direction = ParameterDirection.Output;
command.Parameters.Add(statusParameter);
command.ExecuteNonQuery();
if (command.Parameters["line"].Value is DBNull)
return null;
string line = command.Parameters["line"].Value as string;
return line;
}
Edit: I tried manually calling the following procedure prior to executing the user's code: BEGIN DBMS_OUTPUT.ENABLE(32768); END;. This executes without error but after doing so the later calls to DBMS_OUTPUT.GET_LINE still return null.
It looks like what may be happening is that each time I execute a new query to the database, even though it's on the same connection, that the DBMS_OUTPUT buffer is being cleared. I am not sure if this is the case, but it seems to be - nothing else would readily explain the lack of data in the buffer.
Still searching for a way to handle this...
Points to keep in mind:
This is an academic project for student training and development; hence, it is not expected that this mini-application be "production-ready" in any way. Allowing users to execute raw queries posted via the Web obviously leads to all sorts of security risks - which is why this would never be put into an actual production scenario.
I currently open a connection and maintain it throughout a single API call by passing it into each OracleCommand object I create. This, in theory, should mean that the buffer is maintained, but it doesn't appear to be the case. Either the data I write is not making it to the buffer in the first place, or the buffer is flushed each time an OracleCommand object is actually executed against the database connection.
With the caveat that in reality you'd never write code that expects that anyone will ever see data that you attempt to write to the dbms_output...
Within a session, you'd need to call dbms_output.enable that allocates the buffer that is written to by dbms_output. Depending on the Oracle version, you may be able to pass in a null to indicate that you want an unlimited buffer size. In older versions, you'd need to allocate a fixed buffer size (and you'd get an error if you try to write too much data to the buffer). Then you'd call the procedure that calls dbms_output.put[_line]. Finally, you'd be able to call dbms_output.get[_line]. Note that all three things have to happen in the context of a single session. Each session has a separate dbms_output buffer (or no dbms_output buffer).
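Under those constraints, the call sequence from C# would look roughly like this (a sketch only; connectionString and userSuppliedPlSql are hypothetical, and GetDbmsOutputLine is the method from the question):

using (OracleConnection conn = new OracleConnection(connectionString))
{
    conn.Open();   // one session for all three steps

    // 1. Allocate the dbms_output buffer in this session
    //    (null = unlimited on recent versions; use a fixed size on older ones).
    using (var enable = new OracleCommand("begin dbms_output.enable(null); end;", conn))
        enable.ExecuteNonQuery();

    // 2. Run the user's PL/SQL block, which calls dbms_output.put_line.
    using (var userBlock = new OracleCommand(userSuppliedPlSql, conn))
        userBlock.ExecuteNonQuery();

    // 3. Drain the buffer on the same connection/session.
    var lines = new List<string>();
    string line;
    while ((line = GetDbmsOutputLine(conn)) != null)
        lines.Add(line);
}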
I am using SQLite-net for accessing an SQLite database file in a WinRT app. I don't have any problems reading from the database using ExecuteQuery (I actually use the modified version from https://github.com/praeclarum/sqlite-net/issues/82, as I don't use table mappings and want the results as a dictionary; it calls ExecuteDeferredQuery underneath).
When I try to insert records into my database using ExecuteNonQuery, I am getting an exception with the message "CannotOpen".
Well, just a few lines above, I can read from the database successfully. I double-checked the file permissions of the SQLite database file and gave everyone on the computer full control of the file to rule out any file permission issues, but the result is the same: still "CannotOpen".
I just tried to do a select statement with ExecuteNonQuery (just to see if it works), and I still get an exception, this time saying "Row" as the exception message.
I tried to execute my insert with ExecuteQuery to see what happens, no exception is thrown, everything seems OK, but no rows are inserted into my database.
Well, that may be explainable as ExecuteQuery is designed for querying, not inserting, but what could be the reason for ExecuteNonQuery throwing exceptions?
Here is my code (I replaced actual table names and parameters with generic ones for privacy):
SQLiteCommand cmd = conn.CreateCommand("insert into mytable(myfields...) values (?,?,?,?,?,?,?)", my parameters...);
cmd.ExecuteNonQuery(); //throws SQLiteException
However this code doesn't throw exception:
SQLiteCommand cmd = conn.CreateCommand("select * from mytable where some condition...", some parameters...);
var result = cmd.ExecuteToDictionary(); //renamed ExecuteQuery method from https://github.com/praeclarum/sqlite-net/issues/82
UPDATE: I've further tracked the issue down to something even simpler (and weirder). This code is the very first call to the SQLite framework after initialization of the connection, and it throws an exception on the fourth line:
SQLiteCommand cmd = conn.CreateCommand("select * from mytable");
cmd.ExecuteNonQuery();
cmd = conn.CreateCommand("select * from mytable"); //yes, the same simple query as above
cmd.ExecuteNonQuery();//getting error
UPDATE 2: If I call ExecuteToDictionary instead of ExecuteNonQuery, it works.
UPDATE 3: If I try a direct query (from the conn object such as conn.Execute("query...")) before all these calls it fails. If it's an insert query, I get CannotOpen error, if it's a select query, I get a Row error.
Why am I getting an exception on the second call to ExecuteNonQuery?
Why am I getting a different error message "Row" when I try SELECT with ExecuteNonQuery? And lastly, why are these exceptions so user-unfriendly?
Found out the answer. The SQLite file was in a directory that didn't have write access (the file DID have all the access in file properties, but I think it's a WinRT security model issue, as the file was outside the sandbox of WinRT's storage folders). I could read the file, but not write to it. Thanks to SQLite-net's extremely helpful exception messages such as "Row" and "CannotOpen", which give no real detail about the problem, it took me days to realize that it was a file access issue rather than an SQLite configuration/query issue.
If anyone has any similar problems in the future, first, check that the SQLite database is in the storage directory of the WinRT app.
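A rough sketch of what that looks like (inside an async method; "data.db" and the Assets path are placeholder names): copy the database into the app's writable local folder before opening it, so SQLite can take the write locks it needs.

StorageFile source = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appx:///Assets/data.db"));
StorageFile local = await source.CopyAsync(
    ApplicationData.Current.LocalFolder, "data.db", NameCollisionOption.ReplaceExisting);

var conn = new SQLiteConnection(local.Path);   // open the writable copy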
Try closing and reopening the connection object before executing any other operations.
I have a stored procedure I want to execute, and I want to save the whole result aside (in a class field).
Later on I want to retrieve the values of some columns for some rows from this result.
What return types are possible? Which one is the most suitable for my goal?
I know there are DataSet, DataReader, and ResultSet. What else?
What is the main difference between them?
If you want to store the results and use them later (as you have written), you can use the heavyweight DataSet, or fill lightweight lists of custom container types via the data reader.
Or, if you want to consume the results immediately, go with the data reader.
A result set is, AFAIK, either the old VB6 class or the current Java ResultSet interface.
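For instance, a minimal sketch of the lightweight-list approach (MyRow and the column names are hypothetical; conn is an open SqlConnection):

public class MyRow
{
    public int Id { get; set; }
    public string Name { get; set; }
}

var rows = new List<MyRow>();
using (var cmd = new SqlCommand("exec MyStoredProc", conn))
using (SqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        rows.Add(new MyRow
        {
            Id = reader.GetInt32(reader.GetOrdinal("Id")),
            Name = reader.GetString(reader.GetOrdinal("Name"))
        });
    }
}
// "rows" can now be kept in a class field and queried later.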
The traditional way to get data is by using the classes in the System.Data.SqlClient namespace. You can use the DataReader, which is a read-only, forward-only cursor, fast and efficient when you just want to read a recordset. The DataReader is bindable, but you read it one record at a time and therefore can't, for instance, go back. If the recordset is very big the reader is also good, because it keeps just one record at a time in memory.
You can use the DataAdapter to get a DataSet, and then you have complete control of all the data within the DataSet class. It is heavier on the system but very powerful when you need to work with the data in your application. You can also use a DataSet if the query returns more than one recordset.
So it really depends on what you need to do with the data after getting it from the database. If you just need to read it into something else, use a DataReader; otherwise, use a DataSet.
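A rough sketch of the DataAdapter/DataSet route (again with hypothetical names; a multi-result-set query would simply produce several tables in ds.Tables):

var ds = new DataSet();
using (var cmd = new SqlCommand("MyStoredProc", conn) { CommandType = CommandType.StoredProcedure })
using (var adapter = new SqlDataAdapter(cmd))
{
    adapter.Fill(ds);   // the whole result is now cached in memory
}

// Later: read individual cells from the cached copy.
object value = ds.Tables[0].Rows[0]["SomeColumn"];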
Does ExecuteScalar() have any advantages over ExecuteReader()?
ExecuteScalar only returns the first value from the first row of the result set. Internally it is treated just like ExecuteReader(): a DataReader is opened, the value is picked, and the DataReader is destroyed afterwards. I also always wondered about that behavior, but it has one advantage: it takes place within the Framework, and you can't compete with the Framework when it comes to speed.
Edit by rwwilden:
Taking a look with Reflector inside SqlCommand.ExecuteScalar() you can see these lines:
SqlDataReader ds = this.RunExecuteReader(
CommandBehavior.Default, RunBehavior.ReturnImmediately, true, "ExecuteScalar");
obj2 = this.CompleteExecuteScalar(ds, false);
This is exactly what happens inside ExecuteReader. Another advantage is that ExecuteScalar returns null when no data is read; if you use ExecuteReader, you have to check this yourself.
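For comparison, a rough sketch of the two ways to read a single value (cmd is a hypothetical SqlCommand for a query that returns at most one value):

// ExecuteScalar: returns null (or DBNull) when there is no row / no value.
object scalar = cmd.ExecuteScalar();

// ExecuteReader: the "is there a row?" check is yours to write.
object value = null;
using (SqlDataReader reader = cmd.ExecuteReader())
{
    if (reader.Read())
        value = reader.GetValue(0);
}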
From SqlCommand.ExecuteScalar Method
Use the ExecuteScalar method to retrieve a single value (for example, an aggregate value) from a database. This requires less code than using the ExecuteReader method, and then performing the operations that you need to generate the single value using the data returned by a SqlDataReader.
Also from What is the difference between ExecuteReader, ExecuteNonQuery and ExecuteScalar
ExecuteReader: Use for accessing data. It provides a forward-only, read-only, connected recordset.
ExecuteNonQuery: Use for data manipulation, such as Insert, Update, Delete.
ExecuteScalar: Use for retrieving a single value (1 row, 1 column), e.g. the result of an aggregate function. It is faster than other ways of retrieving a single value from the DB.
From ExecuteScalar page on MSDN:
Use the ExecuteScalar method to retrieve a single value (for example, an aggregate value) from a database. This requires less code than using the ExecuteReader method, and then performing the operations that you need to generate the single value using the data returned by a SqlDataReader
So, it's not faster or better, but is used to reduce the amount of code written when only one value is needed.
When your query or SP returns a single value, it's generally better to use ExecuteScalar(), as it retrieves just the first value of the result and is therefore faster in this kind of situation.
ExecuteScalar is intended to get a single value from the database, while ExecuteReader is used to read multiple records (for example, into a DataTable).
ExecuteScalar() takes fewer resources than ExecuteReader(), as the latter returns multi-column, multi-row data from the database.
ExecuteReader() instantiates a SqlDataReader, which is stream-based and reads the results from the data source.