I want to know if multiple active result sets (MARS) exist for Microsoft's Access database. I am aware this exists for SQL Server. I tried using it with Access, but it didn't work for me. How can I use MARS with Access?
In short, Microsoft Access does not support multiple active result sets (MARS). The ODBC provider does not support it, and the reason becomes obvious if you think about what MARS actually offers from a performance standpoint.
The most important reason for MARS to exist is stored procedures executed on a SQL Server that produce multiple result sets. If you have such procedures, you need some way to access each of those result sets.
But in Access there is no such thing as a stored procedure. If you have multiple queries, you can just execute each one separately and get its result set. Hence, no need for MARS.
NOTE
In light of the comments, here's an example of how to have two data readers open at the same time:
using (var connection1 = new OdbcConnection("your connection string here"))
{
    connection1.Open();
    using (var connection2 = new OdbcConnection("your connection string here"))
    {
        connection2.Open();
        using (var cmd1 = connection1.CreateCommand())
        {
            cmd1.CommandText = "YOUR FIRST QUERY HERE";
            using (var dataReader1 = cmd1.ExecuteReader())
            {
                while (dataReader1.Read())
                {
                    // keep reading data from dataReader1 / connection 1
                    // ... at some point you may need to execute a second query
                    using (var cmd2 = connection2.CreateCommand())
                    {
                        cmd2.CommandText = "YOUR SECOND QUERY HERE";
                        // you can now execute the second query here
                        using (var dataReader2 = cmd2.ExecuteReader())
                        {
                            while (dataReader2.Read())
                            {
                                // process rows from the second query
                            }
                        }
                    }
                }
            }
        }
        // no explicit Close() calls needed: the using blocks
        // dispose (and close) both connections
    }
}
Here is sample code using the SqlDataReader:
// Working with SQLServer and C#
// Retrieve all rows
cmd.CommandText = "SELECT some_field FROM data";
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
Console.WriteLine(reader.GetString(0));
}
}
EDIT:
I mean I want to understand whether retrieving data from the database in a while loop (in the case of SqlDataReader) uses the same mechanism as it does when working with SQLite.
// working with SQLite and Java
if (cursor.moveToFirst()) {
do {
String data = cursor.getString(cursor.getColumnIndex("data"));
// do what ever you want here
} while(cursor.moveToNext());
}
cursor.close();
No, there's no cursor on the server side unless your command calls a stored procedure that uses a cursor. When you use a SqlDataReader, the server returns a plain-vanilla SQL result set, one row at a time. Obviously, the data has to be somewhere before you can read it, and that place is the buffers that SQL Server and the drivers manage.
If you were to push this into something like a DataSet instead, then all the rows would be in RAM at once.
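To make the contrast concrete, here is a small sketch of the two approaches; the connection string, table, and column names are placeholders, not anything from your schema:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class StreamVsBuffer
{
    static void Main()
    {
        var connStr = "your connection string here"; // placeholder
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Streaming: only the current row (plus driver buffers)
            // is held in client memory at any one time.
            using (var cmd = new SqlCommand("SELECT some_field FROM data", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }

            // Buffering: Fill pulls every row into the DataSet up
            // front, so the whole result set sits in RAM at once.
            var ds = new DataSet();
            using (var adapter = new SqlDataAdapter("SELECT some_field FROM data", conn))
            {
                adapter.Fill(ds);
            }
            Console.WriteLine(ds.Tables[0].Rows.Count);
        }
    }
}
```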
If your command returns multiple result sets, you can iterate through all of them like this:
do
{
    while (oSqlDataReader.Read())
    {
        // oSqlDataReader.GetData...
    }
} while (oSqlDataReader.NextResult()); // advance to the next result set
We track the same information across two databases, in tables that have similar (enough) schemas. When we update the data in one database, we want to make sure it stays in sync with the table in the other database.
We use Entity Framework 5 for both databases, so I had originally wanted to simply import a DbContext for the secondary database and use TransactionScope to make sure the creates/updates were atomic.
However, I quickly found out that would be a pain to code, since the table names are the same (anyone working in this controller would have to refer to the Product table as <Context>.Product), so I used a SqlConnection object for the secondary table, but received some results I don't quite understand.
If I use the syntax below, the two tables will update atomically/everything goes as planned.
var scopeOptions = new TransactionOptions();
scopeOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
scopeOptions.Timeout = TimeSpan.MaxValue;
var sqlConn = new SqlConnection(ConfigurationManager.ConnectionStrings["Monet"].ConnectionString);
sqlConn.Open();
SqlCommand sqlCommand = sqlConn.CreateCommand();
sqlCommand.CommandText = InsertMonetProduct(product);
using (var ts = new TransactionScope(TransactionScopeOption.Required, scopeOptions))
{
db.Product.Add(product);
db.SaveChanges();
sqlCommand.ExecuteNonQuery();
ts.Complete();
}
However if I use this syntax below the code crashes on the db.SaveChanges() command with the following message:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
var scopeOptions = new TransactionOptions();
scopeOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
scopeOptions.Timeout = TimeSpan.MaxValue;
using (var ts = new TransactionScope(TransactionScopeOption.Required, scopeOptions))
{
using(var sqlConn = new SqlConnection(ConfigurationManager.ConnectionStrings["Monet"].ConnectionString))
{
sqlConn.Open();
using (SqlCommand sqlCommand = sqlConn.CreateCommand())
{
sqlCommand.CommandText = InsertMonetProduct(product);
sqlCommand.ExecuteNonQuery();
db.Product.Add(product);
db.SaveChanges();
}
ts.Complete();
}
}
Any idea why the first syntax works and the second crashes? From what I've read online this is supposed to be a change made on the database/database server itself.
The second bit of code causes an error because it opens multiple database connections within a single TransactionScope. When a program opens a second database connection inside a single scope, the transaction gets promoted to a distributed transaction. You can read more about distributed transactions here.
Searching for "multiple database connections in one transaction scope" is going to help you find a lot more StackOverflow posts. Here are two relevant ones:
C# controlling a transaction across multiple databases
How do you get around multiple database connections inside a TransactionScope if MSDTC is disabled?
Before you walk off into the land of distributed transactions, though, there may be a simpler solution in this case. Transaction scopes can be nested, and a parent scope will roll back if any of its nested scopes fail. Each scope only has to worry about one connection (or just its nested scopes), so we may not run into the MSDTC issue.
Give this a try:
var scopeOptions = new TransactionOptions();
scopeOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
scopeOptions.Timeout = TimeSpan.MaxValue;
using (var ts = new TransactionScope(TransactionScopeOption.Required, scopeOptions))
{
using (var scope1 = new TransactionScope(TransactionScopeOption.Required))
{
// if you can wrap a using statement around the db context here, that would be good
db.Product.Add(product);
db.SaveChanges();
scope1.Complete();
}
using (var scope2 = new TransactionScope(TransactionScopeOption.Required))
{
// omitted the other "using" statements for the connection/command part for brevity
var sqlConn = new SqlConnection(ConfigurationManager.ConnectionStrings["Monet"].ConnectionString);
sqlConn.Open();
SqlCommand sqlCommand = sqlConn.CreateCommand();
sqlCommand.CommandText = InsertMonetProduct(product);
sqlCommand.ExecuteNonQuery(); // if this fails, the parent scope will roll everything back
scope2.Complete();
}
ts.Complete();
}
In a web app, I am using SQL Server. However, when I try to store a bulk amount of data, it misses some of the records and does not insert them into the database. I want to know whether there is any commit statement or synchronization for the database. Data is being sent object by object via an AJAX call.
Here is my code:
try
{
int surah = Convert.ToInt32(Request["surah"]);
string verse = Request["data"];
string connectionString = @"Data Source=(LocalDB)\v11.0;AttachDbFilename=C:\PROGRAM FILES (X86)\MICROSOFT SQL SERVER\MSSQL.1\MSSQL\DATA\PEACE_QURAN.MDF;Integrated Security=True";
System.Data.SqlClient.SqlConnection connection = new SqlConnection(connectionString);
string query = "insert into Ayyat_Translation_Language_old_20131209 values(null,null,"+surah+",'"+verse+"')";
SqlCommand cmd = new SqlCommand(query, connection);
connection.Open();
cmd.ExecuteNonQuery();
connection.Close();
}
catch(Exception e){
System.IO.StreamWriter file = new System.IO.StreamWriter(@"E:\Office_Work\Peace_Quran\Peace_Quran\Files\ExceptionFile.txt", true);
file.WriteLine("exception details : "+e.ToString());
file.Close();
}
As you understand, the records cannot get lost along the way. Either the INSERT statement executes, or you get an exception. Since neither is happening, I believe you lose something in the request-generating mechanism.
I would strongly suggest logging each request. You will probably find out that you receive fewer requests than you thought. This could be for a number of reasons, but since I don't know the exact mechanism calling the server-side code, I cannot have an opinion.
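As a starting point, here's a minimal sketch of a per-request log you could drop into your page class; the log path and method names are hypothetical, so adjust them to your environment:

```csharp
// Log every incoming request before attempting the INSERT, so you can
// compare the number of AJAX calls that actually reached the server
// against the number of rows you expected.
private static readonly object LogLock = new object();

private void LogRequest(int surah, string verse)
{
    lock (LogLock) // serialize concurrent requests writing to one file
    {
        System.IO.File.AppendAllText(
            @"E:\Office_Work\Peace_Quran\Logs\requests.txt", // hypothetical path
            DateTime.Now.ToString("o") + " surah=" + surah +
            " verseLength=" + (verse == null ? 0 : verse.Length) +
            Environment.NewLine);
    }
}
```

Call LogRequest(surah, verse) at the top of your handler, then compare the line count in the log with the row count in the table.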
Hope I helped!
Can't I open a new data reader inside an existing data reader? Please help; I'm new to C#.
string statement11 = "SELECT Planning FROM allow WHERE NPLID = (SELECT MAX(NPLID) FROM allow)";
SqlCommand myCommand11 = new SqlCommand(statement11, con1);
SqlDataReader plan2 = myCommand11.ExecuteReader();
while (plan2.Read())
{
    if (!plan2.IsDBNull(0) && "ok" == plan2.GetString(0))
    {
        string statement99 = "SELECT Dropplan FROM NPLQAnew WHERE NPLID = (SELECT MAX(NPLID) FROM allow)";
        SqlCommand myCommand114 = new SqlCommand(statement99, con1);
        SqlDataReader plandrop = myCommand114.ExecuteReader();
        while (plandrop.Read())
        {
            if (!plandrop.IsDBNull(0) && plandrop.GetString(0) == "Red")
            {
                Lblplan1.BackColor = System.Drawing.Color.Red;
            }
            else if (!plandrop.IsDBNull(0) && "amber" == plandrop.GetString(0))
            {
                Lblplan1.BackColor = System.Drawing.Color.Orange;
            }
            else if (!plandrop.IsDBNull(0) && "Green" == plandrop.GetString(0))
            {
                Lblplan1.BackColor = System.Drawing.Color.Green;
            }
        }
        plandrop.Close();
        this.Lblplan1.Visible = true;
    }
}
plan2.Close();
By default, the SQL Server client will not let you open two simultaneous queries on the same connection. If you are in the process of reading the results of one data reader, for example, you cannot use the same connection to start reading from a second. And, with the way that SQL Server connection pooling works, even asking for a "new" connection is not guaranteed to work either.
You have a couple of options on how to fix this. The first is to refactor your code to eliminate the nested SQL execute calls; for example, load the results of your first query into memory before you loop through and process them.
An easier answer is to enable MARS (Multiple Active Result Sets) on your connection. This is done by adding MultipleActiveResultSets=True to the connection string. This is generally pretty safe to do; it's only off by default to preserve the pre-2005 behavior for old applications, but the linked article does give some guidelines.
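For example, with MARS enabled you can keep two readers open on the same connection; the server, database, table, and column names below are placeholders:

```csharp
using System.Data.SqlClient;

class MarsDemo
{
    static void Main()
    {
        // MultipleActiveResultSets=True is what turns MARS on;
        // the rest of the connection string is a placeholder.
        var connStr = "Server=myServer;Database=myDb;" +
                      "Integrated Security=True;" +
                      "MultipleActiveResultSets=True";
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (var outerCmd = new SqlCommand("SELECT Id FROM Parent", conn))
            using (var outer = outerCmd.ExecuteReader())
            {
                while (outer.Read())
                {
                    // Without MARS, this second ExecuteReader on the
                    // same open connection would throw.
                    using (var innerCmd = new SqlCommand(
                        "SELECT Name FROM Child WHERE ParentId = @id", conn))
                    {
                        innerCmd.Parameters.AddWithValue("@id", outer.GetInt32(0));
                        using (var inner = innerCmd.ExecuteReader())
                        {
                            while (inner.Read()) { /* process child rows */ }
                        }
                    }
                }
            }
        }
    }
}
```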
You can try setting MultipleActiveResultSets=True in your connection string
No, you can't perform this on the same connection, but you can achieve it with
Multiple Active Result Sets (MARS), assuming you have SQL Server 2005 or above.
Or
you need a different connection to be opened for the second command.
Use a using statement. The using statement calls the Dispose method on the object in the correct way, and it also causes the object itself to go out of scope as soon as Dispose is called.
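For instance, a typical shape looks like this (the connection string and query are placeholders):

```csharp
using (var connection = new SqlConnection("your connection string here"))
using (var command = new SqlCommand("SELECT Planning FROM allow", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process the row
        }
    } // reader disposed (and closed) here, even if an exception is thrown
} // command and connection disposed here
```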
For the error "there is already a data reader attached to the current connection": try closing the first data reader, or open a second connection. After this line:
if (!plan2.IsDBNull(0) && "ok" == plan2.GetString(0))
{
    // open a new SQL connection here
    // your string statement99
    :
    :
    // then close your second SQL connection here, before the last }
}
I have the code below, trying to do a bulk copy from Oracle to SQL Server 2005, and it keeps timing out. How can I extend the Oracle connection timeout? From what I read on the web, it seems I cannot.
OracleConnection source = new OracleConnection(GetOracleConnectionString());
source.Open();
SqlConnection dest = new SqlConnection(GetSQLConnectionString() );
dest.Open();
OracleCommand sourceCommand = new OracleCommand("select * from table", source); // attach the source connection
using (OracleDataReader dr = sourceCommand.ExecuteReader())
{
using (SqlBulkCopy s = new SqlBulkCopy(dest))
{
s.DestinationTableName = "Defects";
s.NotifyAfter = 100;
s.SqlRowsCopied += new SqlRowsCopiedEventHandler(s_SqlRowsCopied);
s.WriteToServer(dr);
s.Close();
}
}
source.Close();
dest.Close();
Here is my Oracle connection string:
return "User Id=USER;Password=pass;Data Source=(DESCRIPTION=" +
"(ADDRESS=(PROTOCOL=TCP)(HOST=14.12.7.2)(PORT=1139))" +
"(CONNECT_DATA=(SID=QCTRP1)));";
You can set the s.BulkCopyTimeout option.
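For example, applied to the code in the question (the 600-second value is illustrative; 0 means wait indefinitely):

```csharp
using (SqlBulkCopy s = new SqlBulkCopy(dest))
{
    s.DestinationTableName = "Defects";
    // Number of seconds the WriteToServer operation may take before
    // it times out (the default is 30). Set to 0 for no limit.
    s.BulkCopyTimeout = 600;
    s.WriteToServer(dr);
}
```

Note this governs the bulk-copy operation on the SQL Server side; the Oracle read side is a separate concern.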
In your connection string, there are 'Connection Lifetime' and 'Connection Timeout' parameters. You can set them accordingly. See here for the full reference.
BTW, I know you didn't ask this, but have you considered an ETL tool for migrating your DB records (e.g., Informatica, FME, etc.)? While your approach is valid, it isn't going to be very performant, since you are hydrating all of the records from one DB into the client and then serializing them to another DB. For small bulk sets this isn't a big issue, but if you were processing hundreds of thousands of rows, you might want to consider a dedicated ETL tool.