Get number of result sets from a SqlDataReader - c#

I have an SQL Server stored procedure that returns multiple results. The body of the stored procedure might look like this:
SELECT * FROM tableA;
SELECT * FROM tableB;
SELECT * FROM tableC;
In that case, the stored procedure returns 3 result sets. Other stored procedures might return, e.g., 1, 0, or any number of result sets. Each result set might contain 0 or more rows in it. When loading these, I will need to call IDataReader.NextResult() to navigate between result sets.
How can I reliably get the count of result sets (not row counts) in C#?

There seems to be no property or method on IDataReader that directly exposes the result-set count. The interface is rather intended to be consumed in an incremental/streaming fashion. So, to count the number of result sets returned, increment a counter every time IDataReader.NextResult() returns true while consuming the data.
However, there is a catch. The documentation for IDataReader.NextResult() states:
By default, the data reader is positioned on the first result.
Consider the following scenarios:
The command returned 0 result sets. Your first call to IDataReader.NextResult() returns false.
The command returned 1 result set. Your first call to IDataReader.NextResult() returns false.
The command returned 2 result sets. Your second call to IDataReader.NextResult() returns false.
You can see that we have enough information to count the number of result sets as long as there is at least one result set. That would be the number of times that IDataReader.NextResult() returned true plus one.
To detect whether or not there are 0 result sets, we use another property from the reader: IDataRecord.FieldCount. The documentation for this property states:
When not positioned in a valid recordset, 0; otherwise, the number of columns in the current record. The default is -1.
Thus, we can read that property when first opening the reader to determine whether we are positioned in a valid result set. If the command generates no result sets, the value of IDataRecord.FieldCount on the reader will initially be less than 1. If the command generates at least one result set, the value will initially be positive. This assumes that a result set cannot have 0 columns (which I believe is safe to assume with SQL, though I'm not certain).
So, I would use something like the following to count the number of result sets. If you also need to save the data, that logic must be inserted into this:
using (var reader = command.ExecuteReader())
{
    var resultCount = 0;
    do
    {
        if (reader.FieldCount > 0)
            resultCount++;
        while (reader.Read())
        {
            // Insert logic to actually consume data here…
            // HandleRecordByResultIndex(resultCount - 1, (IDataRecord)reader);
        }
    } while (reader.NextResult());
}
I’ve tested this with System.Data.SqlClient and the commands PRINT 'hi' (0 result sets), SELECT 1 x WHERE 1=0 (1 result set), and SELECT 1 x WHERE 1=0; SELECT 1 x WHERE 1=0 (2 result sets).
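The same counting loop can be exercised without a live database: DataTableReader (obtained from DataSet.CreateDataReader()) implements IDataReader and supports NextResult(), so a DataSet with two tables stands in for a command returning two result sets. A minimal sketch (the helper name is my own):

```csharp
using System;
using System.Data;

// Count result sets with the do/while pattern described above.
static int CountResultSets(IDataReader reader)
{
    var resultCount = 0;
    do
    {
        // FieldCount is 0 (or negative) when not positioned in a valid result set.
        if (reader.FieldCount > 0)
            resultCount++;
        while (reader.Read())
        {
            // Consume (and here, discard) the rows of the current result set.
        }
    } while (reader.NextResult());
    return resultCount;
}

// A DataSet with two tables stands in for a command returning two result sets.
var ds = new DataSet();
var t1 = new DataTable();
t1.Columns.Add("x", typeof(int));
t1.Rows.Add(1);
var t2 = new DataTable();
t2.Columns.Add("y", typeof(int));
ds.Tables.Add(t1);
ds.Tables.Add(t2);

using (var reader = ds.CreateDataReader())
    Console.WriteLine(CountResultSets(reader)); // prints 2
```

Note the second table has no rows but still counts as a result set, which is exactly the distinction the question asks about.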

Use DataReader.NextResult to advance the reader to the next result set:
using (var con = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    using (var cmd = new SqlCommand("SELECT * FROM TableA; SELECT * FROM TableB; SELECT * FROM TableC;", con))
    {
        con.Open();
        using (IDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                int firstIntCol = rdr.GetInt32(0); // assuming the first column is of type Int32
                // other fields ...
            }
            if (rdr.NextResult())
            {
                while (rdr.Read())
                {
                    int firstIntCol = rdr.GetInt32(0); // assuming the first column is of type Int32
                    // other fields ...
                }
                if (rdr.NextResult())
                {
                    while (rdr.Read())
                    {
                        int firstIntCol = rdr.GetInt32(0); // assuming the first column is of type Int32
                        // other fields ...
                    }
                }
            }
        }
    }
}

Another solution to be aware of, in addition to the manual SqlDataReader approach in the accepted answer, is to use a SqlDataAdapter with DataSets and DataTables.
When using those classes, each entire result set is retrieved from the server in one go, and you can iterate over them at your leisure. Several other .NET classes are aware of DataSets and DataTables and can be hooked up to them directly, for read-only or read-write data access if you also set the DeleteCommand, InsertCommand, and UpdateCommand properties. What you get "for free" with that is the ability to alter the data in the DataSet and then simply call Update() to push your local changes to the database. You also gain the RowUpdated event, which you can use to make that happen automatically.
DataSets and DataTables retain the metadata from the database schema, as well, so you can still access columns by name or index, as you please.
Overall, they're a nice feature, though certainly heavier weight than a SqlDataReader is.
Documentation for SqlDataAdapter is here: https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqldataadapter
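As a sketch of the shape this takes: the Fill call below is shown only in a comment because it needs a live SqlConnection (the connection `con` and the SQL are hypothetical), while the iteration helper works on any DataSet and is what "iterate them at your leisure" looks like in code:

```csharp
using System;
using System.Data;

// With a live connection, a SqlDataAdapter (System.Data.SqlClient) fills the
// DataSet in one go. Hypothetical sketch, not runnable here:
//   var da = new SqlDataAdapter("SELECT * FROM TableA; SELECT * FROM TableB;", con);
//   var ds = new DataSet();
//   da.Fill(ds); // one DataTable per result set: ds.Tables[0], ds.Tables[1], ...

// Once filled, the result sets can be iterated at leisure, by table index
// or by column name.
static int TotalRows(DataSet ds)
{
    var total = 0;
    foreach (DataTable table in ds.Tables)
        total += table.Rows.Count;
    return total;
}

// Demo with a hand-built DataSet standing in for the adapter's output.
var demo = new DataSet();
var tableA = new DataTable("TableA");
tableA.Columns.Add("id", typeof(int));
tableA.Rows.Add(1);
tableA.Rows.Add(2);
demo.Tables.Add(tableA);
Console.WriteLine(TotalRows(demo)); // prints 2
```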

Related

cmd.ExecuteScalar() works but throws an ORA-25191 exception

My code works; the function gives me the correct SELECT COUNT(*) value, but it still throws an ORA-25191 exception ("Cannot reference overflow table of an index-organized table") at
at retVal = Convert.ToInt32(cmd.ExecuteScalar());
Since I use the function very often, the exceptions slow down my program tremendously.
private int getSelectCountQueryOracle(string Sqlquery)
{
    try
    {
        int retVal = 0;
        using (DataTable dataCount = new DataTable())
        {
            using (OracleCommand cmd = new OracleCommand(Sqlquery))
            {
                cmd.CommandType = CommandType.Text;
                cmd.Connection = oraCon;
                using (OracleDataAdapter dataAdapter = new OracleDataAdapter())
                {
                    retVal = Convert.ToInt32(cmd.ExecuteScalar());
                }
            }
        }
        return retVal;
    }
    catch (Exception ex)
    {
        exceptionProtocol("Count Function", ex.ToString());
        return 1;
    }
}
This function is called in a foreach loop
// function call in a foreach loop which goes through the table names
foreach (DataRow row in dataTbl.Rows)
{
    ...
    tableNameFromRow = row["TABLE_NAME"].ToString();
    tableRows = getSelectCountQueryOracle("select count(*) as 'count' from " + tableNameFromRow);
    tableColumns = getSelectCountQueryOracle("SELECT COUNT(*) as 'count' FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name='" + tableNameFromRow + "'");
    ...
}
dataTbl.Rows in this outer loop, in turn, comes from the query
SELECT * FROM USER_TABLES ORDER BY TABLE_NAME
If you're using a database-agnostic API like ADO.NET, you would almost always want to use the API's framework to fetch metadata rather than writing custom queries against each database's metadata tables. The various ADO.NET providers are much more likely to write data dictionary queries that handle all the various corner cases, and much more likely to be optimized, than the queries you're likely to write. So rather than writing your own query to populate the dataTbl data table, you'd want to use the GetSchema method:
DataTable dataTbl = connection.GetSchema("Tables");
If you want to keep your custom-coded data dictionary query for some reason, you'd need to filter out the IOT overflow tables since you can't query those directly.
select *
from user_tables
where iot_type IS NULL
or iot_type != 'IOT_OVERFLOW'
Be aware, however, that there are likely to be other tables that you don't want to try to get a count from. For example, the dropped column indicates whether a table has been dropped; presumably, you don't want to count the number of rows in an object in the recycle bin, so you'd want a dropped = 'NO' predicate as well. And you can't do a count(*) on a nested table, so you'd also want a nested = 'NO' predicate if your schema happens to contain nested tables. There are probably other corner cases, depending on the exact set of features your particular schema makes use of, that the developers of the provider have added code for and that you'd have to deal with.
So I'd start with
select *
from user_tables
where ( iot_type IS NULL
or iot_type != 'IOT_OVERFLOW')
and dropped = 'NO'
and nested = 'NO'
but know that you'll probably need or want to add some additional filters depending on the specific features users make use of. I'd certainly much rather let the fine folks who develop the ADO.NET provider worry about all those corner cases than deal with finding all of them myself.
Taking a step back, though, I'd question why you're regularly doing a count(*) on every table in a schema, and why you need an exact answer. In most cases where you're doing counts, you're either doing a one-off where you don't much care how long it takes (e.g., a validation step after a migration), or approximate counts would be sufficient (e.g., getting a list of the biggest tables in the system in order to triage some effort, or tracking growth over time for projections), in which case you could just use the counts already stored in the data dictionary (user_tables.num_rows) from the last time statistics were gathered.
This article helped me to solve my problem.
I've changed my query to this:
SELECT * FROM user_tables
WHERE iot_type IS NULL OR iot_type != 'IOT_OVERFLOW'
ORDER BY TABLE_NAME

What is the best way to check if a record exists in a SQL Server table using C#?

What is the best way? To get a 1 or 0 back? Or to check whether rows are available from a query? I'm arguing for ExecuteScalar, but I'm interested in other answers on why or why not.
//using DataReader.HasRows?
bool result = false;
var cmd = new SqlCommand("select foo, bar from baz where id = 123", _sqlConnection, _sqlTransaction);
cmd.CommandType = System.Data.CommandType.Text;
using (var r = cmd.ExecuteReader())
{
    if (r != null && r.HasRows)
    {
        result = true;
    }
}
return result;
//or using Scalar?
bool result = false;
var cmd = new SqlCommand("if exists(select foo, bar from baz where id = 123) select 1 else select 0", _sqlConnection, _sqlTransaction);
cmd.CommandType = System.Data.CommandType.Text;
int i = (int) cmd.ExecuteScalar();
result = i == 1;
return result;
EXISTS is more efficient than COUNT, because COUNT must scan all rows that match the criteria to include them in the count, while EXISTS does not.
So EXISTS with ExecuteScalar is better.
As more information backing this, according to http://sqlblog.com/blogs/andrew_kelly/archive/2007/12/15/exists-vs-count-the-battle-never-ends.aspx:
Both queries scanned the table, but the EXISTS was able to at least do a partial scan due to the fact it can stop after it finds the very first matching row. Whereas the COUNT() must read each and every row in the entire table to determine if they match the criteria and how many there are. That is the key, folks. The ability to stop working after the first row that meets the criteria of the WHERE clause is what makes EXISTS so efficient. The optimizer knows of this behavior and can factor that in as well. Now keep in mind that these tables are relatively small compared to most databases in the real world, so the figures of the COUNT() queries would be multiplied many times on larger tables. You could easily get hundreds of thousands of reads or more on tables with millions of rows, but the EXISTS will still only have just a few reads on any queries that can use an index to satisfy the WHERE clause.
As a simple experiment using AdventureWorks with MSSQL 2012
set showplan_all on
-- TotalSubtreeCost: 0.06216168
select count(*) from sales.Customer
-- TotalSubtreeCost: 0.003288537
select 1 where exists (select * from sales.Customer)
See also
http://sqlmag.com/t-sql/exists-vs-count
UPDATE: On ExecuteScalar vs ExecuteReader.
Having a look with a disassembler (like Reflector) at the implementation of the System.Data.SqlClient.SqlCommand methods shows something surprising: they are roughly equivalent. Both end up calling the internal helper
internal SqlDataReader RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, bool returnStream, string method, TaskCompletionSource completion, int timeout, out Task task, bool asyncWrite = false)
which returns a SqlDataReader; ExecuteReader returns it as is, while ExecuteScalar consumes it with another helper:
private object CompleteExecuteScalar(SqlDataReader ds, bool returnSqlValue)
{
    object obj2 = null;
    try
    {
        if (!ds.Read() || (ds.FieldCount <= 0))
        {
            return obj2;
        }
        if (returnSqlValue)
        {
            return ds.GetSqlValue(0);
        }
        obj2 = ds.GetValue(0);
    }
    finally
    {
        ds.Close();
    }
    return obj2;
}
As a side note, the same goes for MySQL Connector/NET (the official ADO.NET open-source driver for MySQL): the ExecuteScalar method internally creates a DataReader (MySqlDataReader, to be more precise) and consumes it. See the source file /Src/Command.cs (from https://dev.mysql.com/downloads/connector/net/ or https://github.com/mysql/mysql-connector-net).
Summary: regarding ExecuteScalar vs ExecuteReader, both incur the overhead of creating a SqlDataReader, so I would say the difference is mostly idiomatic.
I'd go with ExecuteScalar with a query like your if exists. It should be as fast as possible on the server and with minimal network traffic.
If you only care about existence, I would use the scalar approach but also update the TSQL to be:
SELECT CASE WHEN EXISTS(SELECT ...) THEN 1 ELSE 0 END
I would use ExecuteScalar with a slightly different query:
string sql = "SELECT CASE WHEN exists(select NULL from baz where id = 123) THEN 1 ELSE 0 END";
var cmd = new SqlCommand(sql, _sqlConnection, _sqlTransaction);
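For completeness, a hedged sketch of the same EXISTS check with the literal 123 replaced by a parameter. It is written against IDbConnection so it isn't tied to SqlClient; the helper name is mine, and the baz/id identifiers come from the question:

```csharp
using System.Data;

static class ExistsCheck
{
    // Builds the CASE WHEN EXISTS command with a parameter instead of
    // concatenating the id into the SQL text.
    public static IDbCommand Build(IDbConnection con, int id)
    {
        var cmd = con.CreateCommand();
        cmd.CommandText =
            "SELECT CASE WHEN EXISTS(SELECT 1 FROM baz WHERE id = @id) THEN 1 ELSE 0 END";
        var p = cmd.CreateParameter();
        p.ParameterName = "@id";
        p.Value = id;
        cmd.Parameters.Add(p);
        return cmd;
    }

    // Usage, assuming an open connection:
    //   bool exists = (int)ExistsCheck.Build(con, 123).ExecuteScalar() == 1;
}
```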

SqlDataReader Reader.Read() shows Enumeration yielded no results

I am trying to generate random Ids from a given table. I can see the random number generated in debug, but when I reach the reader.Read() line it shows "Enumeration yielded no results".
I couldn't quite get what I am missing.
private static void GetRandomId(int maxValue)
{
    string connectionString =
        "Data Source=local;Initial Catalog=Test;user id=Test;password=Test123;";
    string queryString = @"SELECT TOP 1 Id from Pointer WHERE Id > (RAND() * @max);";
    using (var connection = new SqlConnection(connectionString))
    {
        var command = new SqlCommand(queryString, connection);
        command.Parameters.AddWithValue("@max", maxValue);
        connection.Open();
        using (var reader = command.ExecuteReader()) // <-- Here I can see the random value generated
        {
            while (reader.Read())
            {
                // Here reader shows: Enumeration yielded no results
                Console.WriteLine("Value", reader[1]);
                reader.Close();
            }
        }
    }
}
Since you are basically searching for a random Id of an existing record, I believe this may cover what you are trying to do:
Random record from a database table (T-SQL)
SELECT TOP 1 Id FROM Pointer ORDER BY NEWID()
Use SqlCommand.ExecuteScalar Method instead
https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executescalar%28v=vs.110%29.aspx
var dbRandomId = command.ExecuteScalar();
var randomId = Convert.IsDBNull(dbRandomId) ? (int?)null : (int)dbRandomId;
// you now also know if an id was returned with randomId.HasValue
https://msdn.microsoft.com/en-us/library/system.convert.isdbnull%28v=vs.110%29.aspx
Issues with your example:
Issue 1: Couldn't you have @max be computed with a SELECT @max = MAX(Id) FROM Pointer? No need to pass it in as a parameter. Or am I missing the point? Is that a deliberate limit?
Issue 2: Shouldn't it be reader[0] or reader["Id"]? I believe columns are zero based and your selected column's name is "Id".
Issue 3: Be careful not to enumerate somehow the reader via the Debugger because you'll actually consume (some of?) the results right there (I'm guessing you are doing this by your comment "// Here I can _see_ the random value generated") and by the time the reader.Read() is encountered there will be no results left since the reader has already been enumerated and it won't "rewind".
https://msdn.microsoft.com/en-us/library/aa326283%28v=vs.71%29.aspx
DataReader cursor rewind
Issue 4: Why do you close the reader manually when you've already ensured closing and disposal with using? You also already know that at most one record will be returned, given the TOP 1.
If you check the results of the SqlDataReader in the debugger, the results are then gone and won't be found by the Read() call.

Difference between SqlDataReader.Read and SqlDataReader.NextResult

What is the main difference between these two methods? On the MSDN website it is explained as below, but I don't understand it.
Read: Advances the SqlDataReader to the next record. (Overrides DbDataReader.Read().)
NextResult: Advances the data reader to the next result, when reading the results of batch Transact-SQL statements. (Overrides DbDataReader.NextResult().)
If your statement/proc returns multiple result sets (for example, two SELECT statements in a single Command object), then you will get back two result sets.
NextResult is used to move between result sets.
Read is used to move forward in records of a single result set.
Consider the following example:
If you have a proc whose main body is like:
-- Proc start
SELECT Name, Address FROM Table1
SELECT ID, Department FROM Table2
-- Proc end
Executing the above proc produces two result sets: one for Table1 (the first SELECT statement) and another for the second SELECT statement.
By default, the first result set is available for Read. If you want to move to the second result set, you need NextResult.
See: Retrieving Data Using a DataReader
Example Code from the same link: Retrieving Multiple Result Sets using NextResult
static void RetrieveMultipleResults(SqlConnection connection)
{
    using (connection)
    {
        SqlCommand command = new SqlCommand(
            "SELECT CategoryID, CategoryName FROM dbo.Categories;" +
            "SELECT EmployeeID, LastName FROM dbo.Employees",
            connection);
        connection.Open();
        SqlDataReader reader = command.ExecuteReader();
        while (reader.HasRows)
        {
            Console.WriteLine("\t{0}\t{1}", reader.GetName(0),
                reader.GetName(1));
            while (reader.Read())
            {
                Console.WriteLine("\t{0}\t{1}", reader.GetInt32(0),
                    reader.GetString(1));
            }
            reader.NextResult();
        }
    }
}
Not strictly an answer to this question, but if you use the DataTable.Load method to consume data from the reader rather than Reader.Read, note that after Load has completed, the reader is positioned at the start of the next result set. So you should not call NextResult yourself; otherwise you will skip a result set.
A simple loop on Reader.HasRows around a DataTable.Load call is all you need to process potentially multiple result sets in this scenario.
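That pattern can be sketched as follows (helper name mine), and exercised without a server, because DataTableReader from DataSet.CreateDataReader() behaves like a multi-result-set reader. The IsClosed guard is included because DataTable.Load closes the reader once there are no more result sets:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Common;

// Load every result set into its own DataTable. Do NOT call NextResult()
// yourself: DataTable.Load already advances the reader, and it closes the
// reader after the final result set (hence the IsClosed guard).
static List<DataTable> LoadAllResultSets(DbDataReader reader)
{
    var tables = new List<DataTable>();
    while (!reader.IsClosed && reader.HasRows)
    {
        var table = new DataTable();
        table.Load(reader); // consumes the current result set and advances
        tables.Add(table);
    }
    return tables;
}

// Demo: a DataSet with two tables stands in for a two-result-set command.
var source = new DataSet();
var a = new DataTable();
a.Columns.Add("x", typeof(int));
a.Rows.Add(1);
a.Rows.Add(2);
var b = new DataTable();
b.Columns.Add("y", typeof(string));
b.Rows.Add("hi");
source.Tables.Add(a);
source.Tables.Add(b);

var loaded = LoadAllResultSets(source.CreateDataReader());
Console.WriteLine(loaded.Count); // prints 2
```

One caveat: an empty result set in the middle of the batch will end this HasRows loop early, so the pattern fits batches where every result set has rows.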

What is the safest practical way to deal with non-required MS Access text fields in queries?

In querying MS Access, I've learned (How can I preempt a "Specified cast is not valid" exception?) that I have to query defensively where Text (string) values may be empty (or, apparently, in actuality null) by using the "IIF(ISNULL(colName),'',colName)" construct, such as:
SELECT id, pack_size, IIF(ISNULL(description),'',description), department, subdepartment, IIF(ISNULL(vendor_id),'',vendor_id), IIF(ISNULL(vendor_item),'',vendor_item), avg_cost, list_cost FROM PhunkyPlatypi ORDER BY id
I assume this is only necessary for Text columns that have been designated Required == False. Is my assumption wrong? Do I have to do this with all non-required columns?
Specifically, if I need to query defensively regarding columns of data type double, is this the way to do that:
IIF(ISNULL(two_bagger),0.0,two_bagger)
?
Or better yet (one can always hope): Is there some cleaner/less obtrusive way of dealing with result sets that do not contain data in every column?
If it makes any difference, I'm querying the MS Access database from a .NET 4.5.1 Web API app using OleDbDataReader (old whine in new wineskins?)
UPDATE
Re: HansUp's (Reach for the Sky?) suggestion: "Maybe it would be more productive to attack this from the .Net side and make the code more accommodating of Nulls", would something like this be the way to do it, or is there a more efficient/safer way:
if (null == oleDbD8aReader.GetString(2))
{
    description = "Blank description";
}
else
{
    description = oleDbD8aReader.GetString(2);
}
?
UPDATE 2
I changed the code to check for DBNull, setting the value to a generic one based on the data type (string.Empty for Text, 0 for ints, 0.00 for double) when it is DBNull, but I still get the same error message.
UPDATE 3
I'm getting "Specified cast is invalid" on this line:
long RedemItemId = (oleDbD8aReader["dbp_id"] is DBNull ? 0 : (long)oleDbD8aReader["dbp_id"]);
dbp_id is a LongInt in the Access table
The data being returned from the query includes these values in that column:
5
20
30
40
45
60
70
75
90
120
120
...so how could any of these values be failing a cast to long? Should I be using Convert.ToX() instead of (long)? Or something else?
(in response to the add-on question about handling things on the client side...)
OleDbDataReader will return an Access database Text field as either System.String (if it contains a value), or System.DBNull (if it is Null in the database).
So, if you want to convert DBNull values to empty (zero-length) strings just use
cmd.CommandText =
    "SELECT txtCol FROM Clients WHERE ID = 3";
OleDbDataReader rdr = cmd.ExecuteReader();
rdr.Read();
string result = rdr["txtCol"].ToString();
In the cases where you do care if the value returned was DBNull then test for it
cmd.CommandText =
    "SELECT txtCol FROM Clients WHERE ID = 3";
OleDbDataReader rdr = cmd.ExecuteReader();
rdr.Read();
string result;
if (rdr["txtCol"] is DBNull)
{
    result = "{That field was Null.}";
}
else
{
    result = rdr["txtCol"].ToString();
}
(Note that in C#, null and DBNull are different critters.)
Edit
Similarly, for numeric database fields (for example, of type Double), you can "force" nulls to zero with
cmd.CommandText =
    "SELECT dblCol FROM Clients WHERE ID = 3";
OleDbDataReader rdr = cmd.ExecuteReader();
rdr.Read();
double dResult = (rdr["dblCol"] is DBNull ? 0 : Convert.ToDouble(rdr["dblCol"]));
"for Text columns that have been designated required == false ... do I have to do this with all non-required columns?"
Perhaps. Columns with other data types (numeric, Date/Time, etc.) could contain Null if you haven't set Required = True for them. Also, with a LEFT or RIGHT JOIN you could get Nulls in the unmatched rows even if all columns in the source tables have Required = True. If you don't want any Nulls in your query output, you would have to do the substitutions for all possible cases.
"regarding columns of data type double, is this the way to do that"
IIF(ISNULL(two_bagger),0.0,two_bagger)
Yes, that should work. Or you could do it this way if you prefer ...
IIF(two_bagger Is Null,0.0,two_bagger)
From OleDb, I don't believe there is "some cleaner/less obtrusive way of dealing with result sets that do not contain data in every column". Maybe it would be more productive to attack this from the .Net side and make the code more accommodating of Nulls ... then you wouldn't need to ask the db engine to substitute something else for Nulls.
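Building on that last suggestion, here is a hedged sketch of a generic null-safe getter (the helper name is mine). It is demonstrated with a DataTable's reader, since that needs no database, but it accepts any IDataRecord, including OleDbDataReader:

```csharp
using System;
using System.Data;

// Null-safe column read: returns `fallback` when the database value is Null.
// Works with any IDataRecord implementation (OleDbDataReader, SqlDataReader, ...).
static T GetValueOrDefault<T>(IDataRecord record, string column, T fallback)
{
    var i = record.GetOrdinal(column);
    return record.IsDBNull(i)
        ? fallback
        : (T)Convert.ChangeType(record.GetValue(i), typeof(T));
}

// Demo: a DataTable's reader stands in for an OleDbDataReader over an Access
// table whose non-required columns contain Nulls.
var t = new DataTable();
t.Columns.Add("description", typeof(string));
t.Columns.Add("two_bagger", typeof(double));
t.Rows.Add(DBNull.Value, DBNull.Value);

using (var rdr = t.CreateDataReader())
{
    rdr.Read();
    Console.WriteLine(GetValueOrDefault(rdr, "description", "Blank description"));
    Console.WriteLine(GetValueOrDefault(rdr, "two_bagger", 0.0)); // prints 0
}
```

This keeps the SQL free of IIF(ISNULL(...)) wrappers and puts the substitution in one place on the .NET side.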
There is an Access function (not standard SQL, and not strictly Jet), Nz, which could help:
IIF(ISNULL(description),'',description)
is almost equal to
Nz(description,'')
Check whether it is available through OleDbDataReader.
