Using ASP.NET 3.5, Linq to SQL, SQL Server 2005 on Windows Server 2003. Running VS 2008 on XP SP3 locally.
We need to be able to wrap inserts, updates, and deletes in a transaction. When we first tried this by wrapping code blocks with using(var trans = new TransactionScope()) { ...; trans.Complete(); }, we got an appropriate exception telling us we needed to enable network access for remote transactions. We did so and things began to work the way we expected.
Fast-forward to today. There is a little-used part of our app that also received a TransactionScope treatment. Though transactions work properly in all other parts of our codebase, we discovered today that this seldom used piece is throwing the same “Network Access” exception as before:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool. http://img101.imageshack.us/img101/5480/msdtcnetworkaccesserror.jpg
Here's the code that causes the exception:
using (TransactionScope trans = new TransactionScope(TransactionScopeOption.Required, TimeSpan.MaxValue))
{
    using (var dc = new ChargeXferDataContext())
    {
        //create 'Line' object and set initial values
        Line line = new Line();
        line.Unit_Num = UnitId;
        line.SubmittedBy = Viewer.Alias();
        line.LineSubmittedOn = DateTime.Now;

        //get codes to move from checked rows
        //iterate rows in current gridview
        foreach (GridViewRow row in gv.Rows)
        {
            //if checked, insert move order
            HtmlInputCheckBox cb = (HtmlInputCheckBox)row.FindControl("RowLevelCheckBox");
            if (cb.Checked)
            {
                //1st: get required values
                int id = Convert.ToInt32(((TextBox)row.FindControl("fldCodeId")).Text);
                int newId = Convert.ToInt32(((DropDownList)row.FindControl("ddlNewId")).SelectedValue);
                char newPOA = Convert.ToChar(((DropDownList)row.FindControl("ddlPOA")).SelectedValue);

                //2nd: get current diag code from old patient
                //######## Exception happens here...
                DiagCode code = dc.DiagCodes.SingleOrDefault(c => c.Id == id);
                //########

                //3rd: add code to emenline object
                addCode(line, code, newId, newPOA);
            }
        }
        dc.SubmitChanges();
        trans.Complete();
    }
}
If you've got any suggestions, they would be appreciated. Let me know if I can explain something more. Thanks in advance!!
Consider the following example code:
using System.Data.SqlClient;

namespace ReportLoadTest
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var con = new SqlConnection("...your connection string here..."))
            {
                con.Open();

                // Start a transaction and insert a row, but don't commit yet.
                var trans = con.BeginTransaction();
                var cmd = con.CreateCommand();
                cmd.Transaction = trans;
                cmd.CommandText = @"insert SomeTable(...columns...) values (...); select scope_identity()";
                var rows = cmd.ExecuteScalar();

                // With the transaction still open, ask Reporting Services to load a report.
                var rs = new SSRS.ReportExecutionService();
                rs.Credentials = System.Net.CredentialCache.DefaultCredentials;
                rs.Url = "http://localhost/ReportServer/ReportExecution2005.asmx";
                var ei = rs.LoadReport("/Folder/Folder/Some report", null);
            }
        }
    }
}
Under what conditions would this program "get stuck" at the call to ReportExecutionService.LoadReport?
By stuck, I mean 0 CPU, 0 I/O - no progress being made at all by the calling program, Reporting Services or SQL Server.
This program will get stuck if the report that's being loaded contains a dataset that's used to populate the available values for a parameter and that dataset is based on a query that reads rows from SomeTable.
LoadReport will eventually time out and there will be zero helpful information left lying around to help you figure out what happened.
Possible solutions:
Change the report to do "dirty reads" on SomeTable
Change the database to snapshot isolation mode to avoid the lock on the table (a sketch of this is shown below).
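For the first option, the report's dataset query can read past the open transaction with WITH (NOLOCK) or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. For the second, here is a minimal sketch of turning on row versioning; the database name ReportsDb and the connection string are placeholders, and the READ_COMMITTED_SNAPSHOT statement needs the database to be otherwise idle while it runs:

// Sketch only: enable snapshot-based reads so the report's parameter query
// sees the last committed row versions instead of blocking on the open transaction.
using (var con = new SqlConnection("...connection string, Initial Catalog=master..."))
{
    con.Open();
    var cmd = con.CreateCommand();
    cmd.CommandText = @"
        ALTER DATABASE ReportsDb SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE ReportsDb SET READ_COMMITTED_SNAPSHOT ON;";
    cmd.ExecuteNonQuery();
}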
I ran into this as an actual production issue with a system that runs reports on a schedule, and the report being run was the "scheduled reports history" report.
The subtlety is that LoadReport runs queries - in retrospect, it's obvious that it must run queries since the available values for parameters are contained in the ExecutionInfo that's returned by LoadReport.
I'm trying to programmatically initialize a SQL Server database using DbMigrator class from EntityFramework:
var configuration = new MyDataBaseAssembly.Migrations.Configuration();
configuration.TargetDatabase = new System.Data.Entity.Infrastructure.DbConnectionInfo(myConnectionString, "System.Data.SqlClient");
var dbMigrator = new DbMigrator(configuration);
dbMigrator.Update();
using (var context = new MyDataBaseAssembly.MyDataBaseContext(myConnectionString))
{
    // do some queries
    context.MyTable1.ToList();
}
The weird thing is that the code works fine for all my database assemblies except one. For some reason, one of the databases is not getting initialized, and I'm receiving an exception:
System.Data.SqlClient.SqlException: Invalid object name 'dbo.MyTable1'.
When I open the SQL Server Profiler, I can see that all the migrations are being applied with no errors. However, when I open the database in SQL Server Management Studio, I can see that it contains no tables at all.
After investigating further, I found that after dbMigrator.Update executes, the transaction it uses is never committed to the database (DBCC OPENTRAN shows the transaction is still open), and for some reason the transaction's work is discarded before I run any queries through MyDataBaseContext.
What can be the cause of such behavior? Why does dbMigrator.Update commit all the necessary migrations for some databases, but not for others? Can I control dbMigrator transactions manually and force them to commit?
I ran into the same problem. I ended up working around it with
// Generate the full migration script instead of letting DbMigrator apply it...
var scriptor = new MigratorScriptingDecorator(dbMigrator);
var script = scriptor.ScriptUpdate(sourceMigration: null, targetMigration: null);

// ...then execute that script yourself over a plain connection.
using (var conn = new SqlConnection(connString))
{
    conn.Open();
    using (SqlCommand cmd = new SqlCommand(script, conn))
    {
        cmd.ExecuteNonQuery();
    }
}
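If you also want explicit control over when the changes are committed (the original complaint was an uncommitted transaction), a variation on the same workaround is to run the generated script inside your own transaction. This is only a sketch and assumes the script is a single batch with no GO separators:

using (var conn = new SqlConnection(connString))
{
    conn.Open();
    using (var tran = conn.BeginTransaction())
    using (var cmd = new SqlCommand(script, conn, tran))
    {
        cmd.ExecuteNonQuery();
        tran.Commit();   // the commit is now explicit and under your control
    }
}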
Here is my code; it works when the database has the table "UserInformation":
public bool Save()
{
    using (SqlConnection connection = new SqlConnection(ConnectionString))
    {
        connection.Open();
        using (SqlTransaction transaction = connection.BeginTransaction())
        {
            try
            {
                using (var adapter = new UserInformationTableAdapter())
                {
                    adapter.Connection = connection;
                    adapter.Transaction = transaction;

                    var table = new HelloDataSet.UserInformationDataTable();
                    HelloDataSet.UserInformationRow row = table.NewUserInformationRow();
                    row.UserName = userName;
                    row.Password = password;
                    row.Brithday = brithday;
                    table.Rows.Add(row);

                    adapter.Update(table);
                    transaction.Commit();
                    return true;
                }
            }
            catch (Exception e)
            {
                transaction.Rollback();
                return false;
            }
            finally
            {
                connection.Close();
            }
        }
    }
}
However, when the table does not exist in the database, the code does not create the "UserInformation" table; instead it jumps to the catch block on the adapter.Update(table); line.
So my question is: how can I create the "UserInformation" table in the database when it is not there? In addition, if the database already has the table "UserInformation", can I add a new column "Position" to that table?
Finally, I got the answer and want to share it. First, I have to say that I put my question the wrong way. What I really have is an application connected to a database, but the user is allowed to switch databases. When the user switches to a new database, I would like the application to copy the entire database structure (not including the data) from the old database to the new one. Also, if I make a change to the database in my application code (adding a new column to one or more tables, or adding another new table), I would like every other database to pick up the update and make the same change when my new application code runs.
So, here is my solution. I wrote a framework called "SchemaManager". It creates an additional table in each database that holds the database's schema version. Every time the application runs, SchemaManager compares the version number hard-coded in the application with the version number stored in the database; if the hard-coded version is greater, SchemaManager works out what changed and applies the update for me.
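To make the idea concrete, here is a minimal sketch of that version check. The table name SchemaVersion, the helper ApplyUpgradeStep and the version constant are placeholders I made up for illustration, not the actual SchemaManager code:

// Hypothetical sketch of the version-check idea described above.
const int codeSchemaVersion = 5;   // the version number hard-coded in the application

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // Make sure the version table exists the first time we see this database.
    using (var create = new SqlCommand(
        @"IF OBJECT_ID('dbo.SchemaVersion') IS NULL
              CREATE TABLE dbo.SchemaVersion (Version int NOT NULL);", conn))
    {
        create.ExecuteNonQuery();
    }

    // Read the database's current version (0 if no row has been written yet).
    int dbVersion;
    using (var read = new SqlCommand("SELECT ISNULL(MAX(Version), 0) FROM dbo.SchemaVersion;", conn))
    {
        dbVersion = (int)read.ExecuteScalar();
    }

    // Apply each missing upgrade step in order and record the new version.
    for (int v = dbVersion + 1; v <= codeSchemaVersion; v++)
    {
        ApplyUpgradeStep(conn, v);   // hypothetical helper holding the ALTER TABLE / CREATE TABLE scripts

        using (var stamp = new SqlCommand("INSERT INTO dbo.SchemaVersion (Version) VALUES (@v);", conn))
        {
            stamp.Parameters.AddWithValue("@v", v);
            stamp.ExecuteNonQuery();
        }
    }
}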
I know my solution is not the best, but this is what I did. If anyone has another solution, please share it with me and other people.
We track the same information across two databases in tables that have a similar (enough) schema. When we update the data in one database we want to make sure the data stays in sync with the table in the other database.
We use Entity Framework 5 in both databases, so I had originally wanted to simply import a DbContext of the secondary database and use TransactionScope to make sure the creates/updates were atomic.
However, I quickly found out that would be a pain to code, since the table names are the same (anyone working in this controller would have to refer to the Product table as <Context>.Product), so I used a SqlConnection object for the secondary table, but received some results I don't quite understand.
If I use the syntax below, the two tables update atomically and everything goes as planned.
var scopeOptions = new TransactionOptions();
scopeOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
scopeOptions.Timeout = TimeSpan.MaxValue;
var sqlConn = new SqlConnection(ConfigurationManager.ConnectionStrings["Monet"].ConnectionString);
sqlConn.Open();
SqlCommand sqlCommand = sqlConn.CreateCommand();
sqlCommand.CommandText = InsertMonetProduct(product);
using (var ts = new TransactionScope(TransactionScopeOption.Required, scopeOptions))
{
    db.Product.Add(product);
    db.SaveChanges();
    sqlCommand.ExecuteNonQuery();
    ts.Complete();
}
However, if I use the syntax below, the code crashes on the db.SaveChanges() call with the following message:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
var scopeOptions = new TransactionOptions();
scopeOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
scopeOptions.Timeout = TimeSpan.MaxValue;
using (var ts = new TransactionScope(TransactionScopeOption.Required, scopeOptions))
{
    using (var sqlConn = new SqlConnection(ConfigurationManager.ConnectionStrings["Monet"].ConnectionString))
    {
        sqlConn.Open();
        using (SqlCommand sqlCommand = sqlConn.CreateCommand())
        {
            sqlCommand.CommandText = InsertMonetProduct(product);
            sqlCommand.ExecuteNonQuery();
            db.Product.Add(product);
            db.SaveChanges();
        }
        ts.Complete();
    }
}
Any idea why the first syntax works and the second crashes? From what I've read online this is supposed to be a change made on the database/database server itself.
The second bit of code causes an error because it opens multiple database connections within a single TransactionScope. When a program opens a second database connection inside a single scope, the transaction is promoted to a distributed transaction, which is where MSDTC comes in. You can read more information about distributed transactions here.
Searching for "multiple database connections in one transaction scope" is going to help you find a lot more StackOverflow posts. Here are two relevant ones:
C# controlling a transaction across multiple databases
How do you get around multiple database connections inside a TransactionScope if MSDTC is disabled?
Before you walk off into the land of distributed transactions, though, there may be a simpler solution for this case. Transaction scopes can be nested, and a parent scope will roll back if any of its nested scopes fail. Each scope then only has to worry about a single connection (or just its nested scopes), so we may not run into the MSDTC issue.
Give this a try:
var scopeOptions = new TransactionOptions();
scopeOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
scopeOptions.Timeout = TimeSpan.MaxValue;
using (var ts = new TransactionScope(TransactionScopeOption.Required, scopeOptions))
{
    using (var scope1 = new TransactionScope(TransactionScopeOption.Required))
    {
        // if you can wrap a using statement around the db context here, that would be good
        db.Product.Add(product);
        db.SaveChanges();
        scope1.Complete();
    }

    using (var scope2 = new TransactionScope(TransactionScopeOption.Required))
    {
        // omitted the other "using" statements for the connection/command parts for brevity
        var sqlConn = new SqlConnection(ConfigurationManager.ConnectionStrings["Monet"].ConnectionString);
        sqlConn.Open();
        SqlCommand sqlCommand = sqlConn.CreateCommand();
        sqlCommand.CommandText = InsertMonetProduct(product);
        sqlCommand.ExecuteNonQuery(); // if this fails, the parent scope will roll everything back
        scope2.Complete();
    }

    ts.Complete();
}
Background:
I am using SQL statements to create a Temp database on a server, which will store data until it is needed further by my client program.
Problem:
My SQL statement to create the database works properly and creates the database with all the required specifications when run through SQL Server Management Studio; however, when my program executes the same statement it only creates a database with the default settings, except for the name.
Questions:
Why is this?
How can I make it create a database with my specifications?
SQL statement:
CREATE DATABASE Temp ON PRIMARY(
NAME = Temp
, FILENAME = 'C:\Temp.mdf'
, SIZE = 2MB
, FILEGROWTH = 10%) LOG ON (
NAME = Temp_Log
, FILENAME = 'C:\Temp.ldf'
, SIZE = 1MB, MAXSIZE = 70MB
, FILEGROWTH = 10%)
Code:
public void AcuConvert()
{
    using (DestD)
    {
        SqlCommand command = new SqlCommand();
        DestD.Open();
        command.Connection = DestD;

        foreach (var item in Entity.SqlDestinationQueries.ToList())
        {
            command.CommandText = item.Query;
            command.ExecuteNonQuery(); //This is where the command is run
        }
        foreach (var item in Entity.SystemQueries.ToList())
        {
            command.CommandText = item.Query.Replace("#Sys", SysD.Database);
            command.ExecuteNonQuery();
        }
        foreach (var item in Entity.InsertQueries.ToList())
        {
            command.CommandText = item.Query.Replace("#Source", SourceD.Database);
            command.ExecuteNonQuery();
        }
    }
}
Have you tried using SQL Server Management Objects instead of a raw SQL statement?
For example:
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Common;
...
// Connect to the default instance
Server server = new Server();
// Establish new database details
Database database = new Database(server, "MyTempDB");
// Add primary filegroup details
database.FileGroups.Add(new FileGroup(database, "PRIMARY"));
// Set Primary datafile properties
DataFile primaryFile = new DataFile(database.FileGroups["PRIMARY"],
"MyTempDB_Data", "C:\\MyTempDB.mdf");
primaryFile.Size = 2048; // Sizes are in KB
primaryFile.GrowthType = FileGrowthType.Percent;
primaryFile.Growth = 10;
// Add to the Primary filegroup
database.FileGroups["PRIMARY"].Files.Add(primaryFile);
// Define the log file
LogFile logfile = new LogFile(database, "MyTempDB_Log", "C:\\MyTempDB_Log.ldf");
logfile.Size = 1024;
logfile.GrowthType = FileGrowthType.Percent;
logfile.Growth = 10;
logfile.MaxSize = 70 * 1024;
// Add to the database
database.LogFiles.Add(logfile);
// Create
database.Create();
database.Refresh();
You can connect to the server with specific credentials, too, of course:
http://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.server.aspx
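For instance, a minimal sketch using SQL authentication (the instance name, login and password below are placeholders):

// Connect with explicit credentials instead of the default instance/Windows identity.
ServerConnection connection = new ServerConnection(@"MYSERVER\INSTANCE", "someLogin", "somePassword");
Server server = new Server(connection);
// ...then build the Database, FileGroup, DataFile and LogFile objects exactly as above.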
If, however, you're stuck with using text scripts to create your database, I'd ensure that your Initial Catalog is set correctly (i.e. to 'master') in your connection string and your user has the necessary permissions - CREATE DATABASE / CREATE ANY DATABASE / ALTER DATABASE. If this still doesn't give you any joy, try stripping out the rest of your C# code and run the create SQL independently of the other statements - it could be that there's a side-effect from a preceding query. Use Profiler to see exactly what's running as you add them back in.
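For that stripped-down test, a minimal sketch like the following takes everything else out of the picture; the connection string is a placeholder, and the important parts are Initial Catalog=master and a login with CREATE DATABASE permission:

// Run only the CREATE DATABASE batch, on its own connection, against master.
string createSql = @"CREATE DATABASE Temp ON PRIMARY(
    NAME = Temp, FILENAME = 'C:\Temp.mdf', SIZE = 2MB, FILEGROWTH = 10%)
    LOG ON (NAME = Temp_Log, FILENAME = 'C:\Temp.ldf', SIZE = 1MB, MAXSIZE = 70MB, FILEGROWTH = 10%)";

using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=master;Integrated Security=True"))
using (var cmd = new SqlCommand(createSql, conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();
}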
Edit:
I tested your script against a local SQL Server instance (2012 SqlLocalDB) via a small C# program using SqlClient and it ran just fine after changing the file paths to ones I had write access to (root of C is protected by default). The only other amendment was that the Primary size had to start at 3MB or more. Any smaller and the default tables could not be created. This may be another avenue of investigation for you to explore.
Your alternative option could be to use the Process class to run sqlcmd.exe.
E.g.
var process = Process.Start(WORKING_PATH, argument);
%WORKING_PATH% being "C:\tools\sql\sqlcmd.exe"
%argument% being "-i C:\scripts\Create.sql" (sqlcmd reads the script file via the -i switch)
I use this strategy to dump test data into test environments when bootstrapping acceptance test fixtures.
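In that spirit, a slightly fuller sketch; the server name, paths and switches are examples (-S selects the server, -E uses Windows authentication, -i points sqlcmd at the script file):

// Shell out to sqlcmd.exe, capture its output and wait for it to finish.
using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = @"C:\tools\sql\sqlcmd.exe",
    Arguments = @"-S .\SQLEXPRESS -E -i C:\scripts\Create.sql",
    UseShellExecute = false,
    RedirectStandardOutput = true
};

using (var process = Process.Start(psi))
{
    string output = process.StandardOutput.ReadToEnd(); // sqlcmd's messages, useful for logging
    process.WaitForExit();
    bool succeeded = process.ExitCode == 0;             // non-zero exit code means sqlcmd reported an error
}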
Cheers