I have created multiple connections in Npgsql to execute multiple queries, as shown in the code below.
class TransactionAccess
{
private const string connString = "Host=localhost;Username=postgres;Password=1234;Database=ExpenseManagerDB";
public static void GetTransactions()
{
using (var connection = new NpgsqlConnection(connString))
{
var transactions = connection.Query<TransactionView>(@"SELECT t.transaction_id, t.account_id, a.account_name, a.type, t.note, t.amount, t.date
FROM account AS a
INNER JOIN transaction AS t ON a.account_id = t.account_id");
transactions.Dump();
}
}
public static void GetTransactionInfo(int id)
{
using (var connection = new NpgsqlConnection(connString))
{
var transactionInfo = connection.Query<TransactionView>(@"SELECT a.account_name, a.type, DATE(t.date), t.amount, t.note, t.transaction_id
FROM transaction AS t
INNER JOIN account AS a ON t.account_id = a.account_id
WHERE t.transaction_id = @id", new { id });
transactionInfo.Dump();
}
}
public static void MakeTransaction(Transaction transaction, Account account)
{
using (var connection = new NpgsqlConnection(connString))
{
connection.Execute(@"INSERT INTO transaction(account_id, amount, date, note)
SELECT a.account_id, @Amount, @Date, @Note
FROM account AS a
WHERE a.account_name = @Account_Name", new { transaction.Amount, transaction.Date, transaction.Note, account.Account_Name });
}
}
}
I wanted to execute all queries with a single connection. How can I do that?
Why not use batching, as described in the Npgsql documentation?
await using var batch = new NpgsqlBatch(conn)
{
BatchCommands =
{
new("INSERT INTO table (col1) VALUES ('foo')"),
new("SELECT * FROM table")
}
};
await using var reader = await batch.ExecuteReaderAsync();
Source : https://www.npgsql.org/doc/basic-usage.html
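Alternatively, if batching doesn't fit your case, you can open a single connection yourself and run all the queries on it before disposing it. A rough sketch (Dapper assumed, with the TransactionView type and SQL taken from the question; RunAll is an illustrative name, not part of any API):

```csharp
using Dapper;
using Npgsql;

class TransactionAccess
{
    private const string connString = "Host=localhost;Username=postgres;Password=1234;Database=ExpenseManagerDB";

    // One connection, opened once and reused for several queries;
    // it is disposed only after all of them have run.
    public static void RunAll(int id)
    {
        using (var connection = new NpgsqlConnection(connString))
        {
            connection.Open();

            var transactions = connection.Query<TransactionView>(
                @"SELECT t.transaction_id, t.account_id, a.account_name, a.type, t.note, t.amount, t.date
                  FROM account AS a
                  INNER JOIN transaction AS t ON a.account_id = t.account_id");

            var transactionInfo = connection.Query<TransactionView>(
                @"SELECT a.account_name, a.type, DATE(t.date), t.amount, t.note, t.transaction_id
                  FROM transaction AS t
                  INNER JOIN account AS a ON t.account_id = a.account_id
                  WHERE t.transaction_id = @id", new { id });
        }
    }
}
```

Note that Npgsql pools connections by default, so even the original per-method using blocks mostly hand a pooled physical connection back and forth rather than opening a new TCP connection each time; sharing one NpgsqlConnection mainly saves pool round trips and lets you wrap the queries in a single transaction if needed.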
PS: I would have commented instead, but I can't because of low reputation :D
I am developing an ASP.NET Core MVC API that calls resources in an Azure Cosmos DB. When I try to perform a GET for any specific ID, I receive DocumentClientException: Entity with the specified id does not exist in the system. I can confirm that the entity does exist in the system, and that the connection works, because I can successfully perform other methods and requests. The partition key is _id.
Debugging with breakpoints in Visual Studio, I can see that the correct ID is received at the API, but I can't confirm what exactly it is sending to Azure.
The controller methods: (the ID field is a random string of numbers and text)
//controller is MoviesController decorated with [Route("api/[controller]")]
//sample GET is to localhost:port/api/Movies/5ca6gdwndkna99
[HttpGet("{id}")]
public async Task<MoviesModel> Get(string id)
{
MoviesModel movie = await _persistence.GetMovieAsync(id);
return movie;
}
The data handling method:
public async Task<MoviesModel> GetMovieAsync(string Id)
{
string _id = Id;
RequestOptions options = new RequestOptions();
options.PartitionKey = new PartitionKey(_id);
var documentUri = UriFactory.CreateDocumentUri(_databaseId, "movies", Id);
Document result = await _client.ReadDocumentAsync(documentUri,options);
return (MoviesModel)(dynamic)result;
}
Other methods, like getting a list of all movies and returning it to a table, are working fine, so we can rule out network issues.
public async Task<List<MoviesModel>> GetMoviesAsync()
{
var documentCollectionUri = UriFactory.CreateDocumentCollectionUri(_databaseId, "movies");
// build the query
var feedOptions = new FeedOptions() { EnableCrossPartitionQuery = true };
var query = _client.CreateDocumentQuery<MoviesModel>(documentCollectionUri, "SELECT * FROM movies", feedOptions);
var queryAll = query.AsDocumentQuery();
// combine the results
var results = new List<MoviesModel>();
while (queryAll.HasMoreResults)
{
results.AddRange(await queryAll.ExecuteNextAsync<MoviesModel>());
}
return results;
}
public async Task<List<GenresModel>> GetGenresAsync()
{
await EnsureSetupAsync();
var documentCollectionUri = UriFactory.CreateDocumentCollectionUri(_databaseId, "genres");
// build the query
var feedOptions = new FeedOptions() { EnableCrossPartitionQuery = true };
var query = _client.CreateDocumentQuery<GenresModel>(documentCollectionUri, "SELECT * FROM genres", feedOptions);
var queryAll = query.AsDocumentQuery();
// combine the results
var results = new List<GenresModel>();
while (queryAll.HasMoreResults)
{
results.AddRange(await queryAll.ExecuteNextAsync<GenresModel>());
}
return results;
}
Firstly, I would suggest taking another look at your Cosmos DB design, because of the following:
Problems:
If your _id is a random string of numbers and text, it is not good to use the entire _id as your partition key, because that would create a new partition for each entry (although Azure will range-partition them later).
Querying by partition key alone is not efficient; for pinpoint queries you should have both a partition key and a row key.
Solution:
Make the first one or two letters of your _id as your partition key. (so your partitions will be finite).
Make your _id as your row key.
If your _id = "abwed123asdf", then your query should be:
RequestOptions options = new RequestOptions();
options.PartitionKey = new PartitionKey(_id.Substring(0,1));
options.RowKey = _id;
This way, your lookup will pinpoint the exact required entry with the help of the partition and row keys (saving a lot of RUs).
Please refer to the docs on choosing a partition key that fits your needs: https://learn.microsoft.com/en-us/azure/cosmos-db/partitioning-overview
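The mapping from id to partition key described above is just a prefix function; a small sketch (PartitionKeyHelper and the default one-character prefix are illustrative assumptions, not part of any SDK):

```csharp
using System;

static class PartitionKeyHelper
{
    // Derive a short, finite partition key from a random id by taking
    // its first character(s); the full id remains the item's unique key.
    public static string ForId(string id, int prefixLength = 1)
    {
        if (string.IsNullOrEmpty(id))
            throw new ArgumentException("id required", nameof(id));
        return id.Substring(0, Math.Min(prefixLength, id.Length));
    }
}
```

For `_id = "abwed123asdf"` this returns `"a"`, matching the `_id.Substring(0, 1)` used in the answer; with at most 36 alphanumeric first characters, the number of partitions stays bounded.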
I was able to get this to work by completely refactoring to the .NET v3 SDK. My code for the solution is in the comments of the GitHub link:
using Microsoft.Azure.Cosmos;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using VidlyAsp.DataHandlers;
namespace VidlyAsp.DataHandlers
{
public class PersistenceNew
{
private static string _endpointUri;
private static string _primaryKey;
private CosmosClient cosmosClient;
private CosmosDatabase database;
private CosmosContainer movieContainer;
private CosmosContainer genreContainer;
private string containerId;
private string _databaseId;
public PersistenceNew(Uri endpointUri, string primaryKey)
{
_databaseId = "Vidly";
_endpointUri = endpointUri.ToString();
_primaryKey = primaryKey;
this.GetStartedAsync();
}
public async Task GetStartedAsync()
{
// Create a new instance of the Cosmos Client
this.cosmosClient = new CosmosClient(_endpointUri, _primaryKey);
database = await cosmosClient.Databases.CreateDatabaseIfNotExistsAsync(_databaseId);
CosmosContainer moviesContainer = await GetOrCreateContainerAsync(database, "movies");
CosmosContainer genresContainer = await GetOrCreateContainerAsync(database, "genres");
movieContainer = moviesContainer;
genreContainer = genresContainer;
}
public async Task<GenresModel> GetGenre(string id)
{
var sqlQueryText = string.Format("SELECT * FROM c WHERE c._id = '{0}'", id);
var partitionKeyValue = id;
CosmosSqlQueryDefinition queryDefinition = new CosmosSqlQueryDefinition(sqlQueryText);
CosmosResultSetIterator<GenresModel> queryResultSetIterator = this.genreContainer.Items.CreateItemQuery<GenresModel>(queryDefinition, partitionKeyValue);
List<GenresModel> genres = new List<GenresModel>();
while (queryResultSetIterator.HasMoreResults)
{
CosmosQueryResponse<GenresModel> currentResultSet = await queryResultSetIterator.FetchNextSetAsync();
foreach (GenresModel genre in currentResultSet)
{
genres.Add(genre);
}
}
return genres.FirstOrDefault();
}
public async Task<MoviesModel> GetMovie(string id)
{
var sqlQueryText = "SELECT * FROM c WHERE c._id = '" + id + "'";
var partitionKeyValue = id;
CosmosSqlQueryDefinition queryDefinition = new CosmosSqlQueryDefinition(sqlQueryText);
CosmosResultSetIterator<MoviesModel> queryResultSetIterator = this.movieContainer.Items.CreateItemQuery<MoviesModel>(queryDefinition, partitionKeyValue);
List<MoviesModel> movies = new List<MoviesModel>();
while (queryResultSetIterator.HasMoreResults)
{
CosmosQueryResponse<MoviesModel> currentResultSet = await queryResultSetIterator.FetchNextSetAsync();
foreach (MoviesModel movie in currentResultSet)
{
movies.Add(movie);
}
}
return movies.FirstOrDefault();
}
/*
Run a query (using Azure Cosmos DB SQL syntax) against the container
*/
public async Task<List<MoviesModel>> GetAllMovies()
{
List<MoviesModel> movies = new List<MoviesModel>();
// SQL
CosmosResultSetIterator<MoviesModel> setIterator = movieContainer.Items.GetItemIterator<MoviesModel>(maxItemCount: 1);
while (setIterator.HasMoreResults)
{
foreach (MoviesModel item in await setIterator.FetchNextSetAsync())
{
movies.Add(item);
}
}
return movies;
}
public async Task<List<GenresModel>> GetAllGenres()
{
List<GenresModel> genres = new List<GenresModel>();
// SQL
CosmosResultSetIterator<GenresModel> setIterator = genreContainer.Items.GetItemIterator<GenresModel>(maxItemCount: 1);
while (setIterator.HasMoreResults)
{
foreach (GenresModel item in await setIterator.FetchNextSetAsync())
{
genres.Add(item);
}
}
return genres;
}
private static async Task<CosmosContainer> GetOrCreateContainerAsync(CosmosDatabase database, string containerId)
{
CosmosContainerSettings containerDefinition = new CosmosContainerSettings(id: containerId, partitionKeyPath: "/_id");
return await database.Containers.CreateContainerIfNotExistsAsync(
containerSettings: containerDefinition,
throughput: 400);
}
}
}
Can we prevent the following from loading more than once in my application? I.e., is there any alternative to this?
public IEnumerable<User> users()
{
var users = Userlist();
return users.ToList();
}
public static List<User> Userlist()
{
string strSQL = "";
List<User> users = new List<User>();
strSQL = "select USERID,USERNAME,PASSWORD from USERS";
//if (Userlist().Count > 0)
//{
// return Userlist();
//}
//else
//{
using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["conn"].ConnectionString))
{
using (var command = new SqlCommand(strSQL, connection))
{
connection.Open();
using (var dataReader = command.ExecuteReader())
{
while (dataReader.Read())
{
users.Add(new User { Id = Convert.ToInt32(dataReader["USERID"]), user = dataReader["USERNAME"].ToString(), password = Decrypt(dataReader["PASSWORD"].ToString()), estatus = true, RememberMe = true });
}
}
}
}
return users;
// }
}
I just wanted the solution to be like the commented-out part (which does not work here).
EDIT: I just wanted to avoid unnecessary database calls.
Thanks in Advance!
The usual trick is to lazily load them. You could just use a Lazy<T>, but a double-checked simple field works too:
static List<Foo> fetched;
static readonly object syncLock = new object(); // because: threading
public static List<Foo> Whatever {
get {
var tmp = fetched;
if(tmp != null) return tmp;
lock(syncLock) {
tmp = fetched;
if(tmp != null) return tmp; // double-checked lock
return fetched = GetTheActualData();
}
}
}
private static List<Foo> GetTheActualData() {...}
Additional thoughts:
storing passwords is never a good idea
List<T> is mutable; you should make sure people can't change the list or the items in the list if you are storing it statically
what do you do when the data changes at the database? how does it update?
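The Lazy<T> route mentioned at the top can be sketched like this (List<string> stands in for List<Foo>, and the Calls counter is only there to show the factory runs once; neither is part of the original code):

```csharp
using System;
using System.Collections.Generic;

static class Cache
{
    public static int Calls; // demonstrates single execution only

    // LazyThreadSafetyMode defaults to ExecutionAndPublication,
    // so GetTheActualData runs at most once even under concurrency.
    private static readonly Lazy<List<string>> fetched =
        new Lazy<List<string>>(GetTheActualData);

    public static List<string> Whatever => fetched.Value;

    private static List<string> GetTheActualData()
    {
        Calls++;
        return new List<string> { "a", "b" }; // stand-in for the DB query
    }
}
```

Every reader of `Cache.Whatever` gets the same list instance, and the factory executes only on first access, which is exactly what the double-checked lock above implements by hand.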
You can also use caching for this.
The idea is that the List<User> will be cached, and any time the application asks for the user list we return it from the cache, avoiding the database hit.
A sample implementation could look like this. I suggest reading more about caching, as there are many aspects that need to be taken care of, such as when the cache expires and how it gets invalidated when new users are inserted into the database.
public List<User> Userlist()
{
ObjectCache cache = MemoryCache.Default;
var users = cache["users"];
if (users == null)
{
CacheItemPolicy policy = new CacheItemPolicy();
//For demonstration, I used a cache expiring after 1 day
//Set the cache policy as per your needs
policy.AbsoluteExpiration = DateTime.Now.AddDays(1);
// Fetch the users here from database
List<User> userList = GetUsersFromDatabase();
//Set the users in the cache
cache.Set("users", userList, policy);
}
return cache["users"] as List<User>;
}
private static List<User> GetUsersFromDatabase()
{
string strSQL = "";
List<User> users = new List<User>();
strSQL = "select USERID,USERNAME,PASSWORD from USERS";
//if (Userlist().Count > 0)
//{
// return Userlist();
//}
//else
//{
using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["conn"].ConnectionString))
{
using (var command = new SqlCommand(strSQL, connection))
{
connection.Open();
using (var dataReader = command.ExecuteReader())
{
while (dataReader.Read())
{
users.Add(new User { Id = Convert.ToInt32(dataReader["USERID"]), user = dataReader["USERNAME"].ToString(), password = Decrypt(dataReader["PASSWORD"].ToString()), estatus = true, RememberMe = true });
}
}
}
}
return users;
}
Use Lazy<T>; it is thread-safe.
private Lazy<IEnumerable<User>> users = new Lazy<IEnumerable<User>>(Userlist);
public Lazy<IEnumerable<User>> Users
{
get
{
return this.users;
}
}
public static IEnumerable<User> Userlist()
{
string strSQL = "";
List<User> users = new List<User>();
strSQL = "select USERID,USERNAME,PASSWORD from USERS";
//if (Userlist().Count > 0)
//{
// return Userlist();
//}
//else
//{
using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["conn"].ConnectionString))
{
using (var command = new SqlCommand(strSQL, connection))
{
connection.Open();
using (var dataReader = command.ExecuteReader())
{
while (dataReader.Read())
{
users.Add(new User { Id = Convert.ToInt32(dataReader["USERID"]), user = dataReader["USERNAME"].ToString(), password = Decrypt(dataReader["PASSWORD"].ToString()), estatus = true, RememberMe = true });
}
}
}
}
return users;
// }
}
Have a look at the code below.
Here is my _cSynchronization class, where the sync functions live.
The (500) passed to the connection-string helpers means timeout = 500.
public static class _cSynchronization
{
public static int transactionCount;
public static uint BatchSize = 10000;
public static uint MemorySize = 20000;
public static List<string> _MGetAllTableList()
{
List<string> list = new List<string>();
DataRowCollection _dr = _CObjectsofClasses._obj_CDatabase._MGetDataRows("Select TABLE_NAME From INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME <> N'AUTOBACKUPSET' AND TABLE_NAME <> N'BINDATA' AND TABLE_NAME <> N'_ATTENDANCESTATUS' AND TABLE_NAME NOT like '%_tracking%' AND TABLE_TYPE ='BASE TABLE' AND TABLE_NAME <> N'schema_info' AND TABLE_NAME <> N'scope_info' AND TABLE_NAME <> N'scope_config' AND TABLE_NAME <> '_CLIENTNAME' AND TABLE_NAME <> '_TABSETTING' AND TABLE_NAME <> '_EMPLOYEEPAYMENT1' AND TABLE_NAME <> '_LOCALCOMPANYINFO' ORDER BY TABLE_NAME");
int a = 0;
string x = "";
if (_dr.Count > 0)
{
_CPubVar._value_I = 0;
_CPubVar._MaxValue_I = _dr.Count + 2;
_CPubVar._IsTableProcess_bool = true;
foreach (DataRow _row in _dr)
{
_CPubVar._value_I++;
_CPubVar._ProcessText_S = "Preparing Tables " + _CPubVar._value_I + " of " + _CPubVar._MaxValue_I;
x = _CObjectsofClasses._obj_CConvert._MConvertToString(_row[0]);
// serverConn.Open();
list.Add(x);
}
}
return list;
}
public static void SetUp(string _pTableName)
{
// Connection to SQL Server database
SqlConnection serverConn = new SqlConnection(_CObjectsofClasses._obj_CConnectionString._MGetServerConnectionString(500));
// Connection to SQL client database
SqlConnection clientConn = new SqlConnection(_CObjectsofClasses._obj_CConnectionString._MGetConnectionString(500));
// Create a scope named "product" and add tables to it.
DbSyncScopeDescription productScope = new DbSyncScopeDescription(_pTableName + "_SCOP");
// Select the colums to be included in the Collection Object
// Define the Products table.
DbSyncTableDescription productDescription =
SqlSyncDescriptionBuilder.GetDescriptionForTable(_pTableName,serverConn);
// Add the Table to the scope object.
productScope.Tables.Add(productDescription);
// Create a provisioning object for "product" and apply it to the on-premise database if one does not exist.
SqlSyncScopeProvisioning serverProvision = new SqlSyncScopeProvisioning(serverConn, productScope);
serverProvision.ObjectSchema = ".dbo";
//
serverProvision.SetCreateProceduresForAdditionalScopeDefault(DbSyncCreationOption.Create);
serverProvision.SetCreateTableDefault(DbSyncCreationOption.Skip);
serverProvision.SetCreateProceduresDefault(DbSyncCreationOption.CreateOrUseExisting);
serverProvision.SetCreateTrackingTableDefault(DbSyncCreationOption.CreateOrUseExisting);
serverProvision.SetCreateTriggersDefault(DbSyncCreationOption.CreateOrUseExisting);
if (!serverProvision.ScopeExists(_pTableName + "_SCOP"))
serverProvision.Apply();
// Provision the SQL client database from the on-premise SQL Server database if one does not exist.
SqlSyncScopeProvisioning clientProvision = new SqlSyncScopeProvisioning(clientConn, productScope);
if (!clientProvision.ScopeExists(_pTableName + "_SCOP"))
clientProvision.Apply();
// Shut down database connections.
serverConn.Close();
serverConn.Dispose();
clientConn.Close();
clientConn.Dispose();
}
public static List<_CSyncDetails> Synchronize(string _pScopeName, SyncDirectionOrder _pDirection)
{
// Connection to SQL Server database
SqlConnection serverConn = new SqlConnection(_CObjectsofClasses._obj_CConnectionString._MGetServerConnectionString(500));
// Connection to SQL client database
SqlConnection clientConn = new SqlConnection(_CObjectsofClasses._obj_CConnectionString._MGetConnectionString(500));
List<_CSyncDetails> _Statics = new List<_CSyncDetails>();
// Perform Synchronization between SQL Server and the SQL client.
SyncOrchestrator syncOrchestrator = new SyncOrchestrator();
// Create provider for SQL Server
SqlSyncProvider serverProvider = new SqlSyncProvider(_pScopeName, serverConn);
// Set the command timeout and maximum transaction size for the SQL Azure provider.
SqlSyncProvider clientProvider = new SqlSyncProvider(_pScopeName, clientConn);
clientProvider.CommandTimeout = serverProvider.CommandTimeout = 500;
//Set memory allocation to the database providers
clientProvider.MemoryDataCacheSize = serverProvider.MemoryDataCacheSize = MemorySize;
//Set application transaction size on destination provider.
serverProvider.ApplicationTransactionSize = BatchSize;
//Count transactions
serverProvider.ChangesApplied += new EventHandler<DbChangesAppliedEventArgs>(RemoteProvider_ChangesApplied);
// Set Local provider of SyncOrchestrator to the server provider
syncOrchestrator.LocalProvider = serverProvider;
// Set Remote provider of SyncOrchestrator to the client provider
syncOrchestrator.RemoteProvider = clientProvider;
// Set the direction of SyncOrchestrator session to Upload and Download
syncOrchestrator.Direction = _pDirection;
// Create SyncOperations Statistics Object
SyncOperationStatistics syncStats = syncOrchestrator.Synchronize();
_Statics.Add(new _CSyncDetails { UploadChangesTotal = syncStats.UploadChangesTotal, SyncStartTime = syncStats.SyncStartTime, DownloadChangesTotal = syncStats.DownloadChangesTotal, SyncEndTime = syncStats.SyncEndTime });
// Display the Statistics
// Shut down database connections.
serverConn.Close();
serverConn.Dispose();
clientConn.Close();
clientConn.Dispose();
return _Statics;
}
}
Here is the function where I perform the sync:
private void _MSync()
{
_CPubVar._IsContinue = true;
_CPubVar._PausebtnCondition = 0;
// _cSynchronization._MClearSyncprovision();
_CPubVar._Stop_bool = false;
this.Text += " - Started at : " + DateTime.Now;
string a = "";
// Define the Products table.
List<string> _Tablelist = new List<string>();
Collection<string> _ColNames = new Collection<string>();
_list1.Add(new _CSyncDetails { SyncStartTime = DateTime.Now });
_Tablelist.AddRange(_cSynchronization._MGetAllTableList());
SyncDirectionOrder _order = SyncDirectionOrder.Download;
_CPubVar._MaxValue_I = (_Tablelist.Count * 2);
_CPubVar._value_I = 0;
foreach (string tbl in _Tablelist)
{
try
{
a = Regex.Replace(Environment.MachineName + Environment.UserName, @"[^0-9a-zA-Z]+", "").ToUpper() + "_" + tbl + "_SCOPE";
_CPubVar._value_I++;
_CPubVar._ProcessText_S = "Sync Tables " + _CPubVar._value_I + " of " + _CPubVar._MaxValue_I;
_cSynchronization.SetUp(tbl);
if (_CPubVar._IsServerRunning_bool)
{
_order = SyncDirectionOrder.DownloadAndUpload;
}
else
{
if (tbl == "_BANK" || tbl == "_BANKACCOUNT" || tbl == "_CLIENTNAME" || tbl == "_PACKAGE" || tbl == "_PACKAGEDET" || tbl == "_PAYMENTEXPENCES" || tbl == "_PROJECT" || tbl == "_PROJECTDET" || tbl == "_REQUIREMENT" || tbl == "_REQUIREMENTDET" || tbl == "_SERVER" || tbl == "_UNIT" || tbl == "_ITEM" || tbl == "ManageUser" || tbl == "USERPERMISSION" || tbl == "USERROLE" || tbl == "USERROLEDET")
{
_order = SyncDirectionOrder.DownloadAndUpload;
}
else
{
_order = SyncDirectionOrder.Download;
}
}
_CPubVar._value_I++;
_CPubVar._ProcessText_S = "Sync Tables " + _CPubVar._value_I + " of " + _CPubVar._MaxValue_I;
if (tbl != "_COMPANYINFO")
{
_list1.AddRange(_cSynchronization.Synchronize(tbl + "_SCOP", _order));
}
else
{
if (_CPubVar._IsServerRunning_bool)
{
_list1.AddRange(_cSynchronization.Synchronize(tbl + "_SCOP", SyncDirectionOrder.DownloadAndUpload));
}
}
}
catch (Exception exx)
{
_syncErr.Add(new _CSyncErrors { SyncErrorDateTime = DateTime.Now, SyncErrorMessage = exx.Message, SyncTableAlies = _CTableName._MgetTableAlies(tbl) });
pictureBox1.Visible = label3.Visible = true;
label3.Text = _syncErr.Count.ToString();
Application.DoEvents();
continue;
}
}
thread.Abort();
}
Problem:
The above code works fine when only one PC (say A) syncs at a time: no error, everything completes.
Likewise when only PC B syncs: no error.
But when I run the application on A and B simultaneously, for some tables I get:
Cannot enumerate changes at the RelationalSyncProvider for table 'TableName'
Running Status
PC A PC B Result
YES NO No Error
NO YES No Error
YES YES Error
Please note that on the client side my database is SQL Server 2008, and on the server side it is SQL Server 2012.
Where am I wrong?
UPDATE:
I have 72 tables in the database, and the error below is not specific to one 'TableName'; it may be any of the 72. For example, Table1 gives me the error, yet after the sync completes, if I rerun the application the error may not appear again.
Cannot enumerate changes at the RelationalSyncProvider for table 'TableName'
Check the Timeout
Eight minutes might not be enough time. Try increasing the synchronization command timeout to an absurd number to find out if that's the problem.
clientProvider.CommandTimeout = 3000;
serverProvider.CommandTimeout = 3000;
Turn on Tracing
Edit the app.config file for your application by adding the following system.diagnostics segment. It will log verbosely to C:\MySyncTrace.txt.
<configuration>
<system.diagnostics>
<switches>
<!--4-verbose.-->
<add name="SyncTracer" value="4" />
</switches>
<trace autoflush="true">
<listeners>
<add name="TestListener"
type="System.Diagnostics.TextWriterTraceListener"
initializeData="c:\MySyncTrace.txt"/>
</listeners>
</trace>
</system.diagnostics>
</configuration>
My Recreation
I tried to recreate the error that you are experiencing. I created a simplified version of what you are trying to accomplish. It builds and successfully synchronizes two SqlExpress databases.
Unfortunately, I wasn't able to recreate the error. Here is the setup that I used and the test cases afterwards.
Mock Databases Create Script
USE Master;
GO
IF EXISTS (
SELECT *
FROM sys.databases
WHERE NAME = 'SyncTestServer'
)
DROP DATABASE SyncTestServer
GO
CREATE DATABASE SyncTestServer;
GO
CREATE TABLE SyncTestServer.dbo.Table1 (Column1 VARCHAR(50) PRIMARY KEY)
CREATE TABLE SyncTestServer.dbo.Table2 (Column1 VARCHAR(50) PRIMARY KEY)
INSERT INTO SyncTestServer.dbo.Table1 (Column1)
VALUES ('Server Data in Table1')
INSERT INTO SyncTestServer.dbo.Table2 (Column1)
VALUES ('Server Data in Table2')
IF EXISTS (
SELECT *
FROM sys.databases
WHERE NAME = 'SyncTestClient'
)
DROP DATABASE SyncTestClient
GO
CREATE DATABASE SyncTestClient;
GO
CREATE TABLE SyncTestClient.dbo.Table1 (Column1 VARCHAR(50) PRIMARY KEY)
CREATE TABLE SyncTestClient.dbo.Table2 (Column1 VARCHAR(50) PRIMARY KEY)
INSERT INTO SyncTestClient.dbo.Table1 (Column1)
VALUES ('Client Data in Table1')
INSERT INTO SyncTestClient.dbo.Table2 (Column1)
VALUES ('Client Data in Table2')
Mock Console Application
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;
using Microsoft.Synchronization;
namespace StackOverflow_SyncFramework
{
public class _CSyncDetails
{
public int UploadChangesTotal;
public DateTime SyncStartTime;
public int DownloadChangesTotal;
public DateTime SyncEndTime;
}
public class Program
{
static void Main(string[] args)
{
_cSynchronization sync = new _cSynchronization();
sync._MSync();
Console.ReadLine();
}
}
public class _cSynchronization
{
public static int transactionCount;
public static uint BatchSize = 10000;
public static uint MemorySize = 20000;
public const string ServerConnString =
@"Data Source=.\SQLExpress;initial catalog=SyncTestServer;integrated security=True;MultipleActiveResultSets=True;";
public const string ClientConnString =
@"Data Source=.\SQLExpress;initial catalog=SyncTestClient;integrated security=True;MultipleActiveResultSets=True;";
public static List<string> _MGetAllTableList()
{
// I just created two databases that each have the following table
// Synchronization is working
List<string> list = new List<string>()
{
"Table1",
"Table2"
};
return list;
}
public static void SetUp(string _pTableName)
{
// Connection to SQL Server database
SqlConnection serverConn =
new SqlConnection(ServerConnString);
// Connection to SQL client database
SqlConnection clientConn =
new SqlConnection(ClientConnString);
// Create a scope named "product" and add tables to it.
Console.WriteLine(_pTableName);
DbSyncScopeDescription productScope = new DbSyncScopeDescription(_pTableName + "_SCOP");
// Define the Products table.
DbSyncTableDescription productDescription =
SqlSyncDescriptionBuilder.GetDescriptionForTable(_pTableName, serverConn);
// Add the Table to the scope object.
productScope.Tables.Add(productDescription);
// Create a provisioning object for "product" and apply it to the on-premise database if one does not exist.
SqlSyncScopeProvisioning serverProvision = new SqlSyncScopeProvisioning(serverConn, productScope);
serverProvision.ObjectSchema = ".dbo";
serverProvision.SetCreateProceduresForAdditionalScopeDefault(DbSyncCreationOption.Create);
serverProvision.SetCreateTableDefault(DbSyncCreationOption.Skip);
serverProvision.SetCreateProceduresDefault(DbSyncCreationOption.CreateOrUseExisting);
serverProvision.SetCreateTrackingTableDefault(DbSyncCreationOption.CreateOrUseExisting);
serverProvision.SetCreateTriggersDefault(DbSyncCreationOption.CreateOrUseExisting);
if (!serverProvision.ScopeExists(_pTableName + "_SCOP"))
serverProvision.Apply();
// Provision the SQL client database from the on-premise SQL Server database if one does not exist.
SqlSyncScopeProvisioning clientProvision = new SqlSyncScopeProvisioning(clientConn, productScope);
if (!clientProvision.ScopeExists(_pTableName + "_SCOP"))
clientProvision.Apply();
// Shut down database connections.
serverConn.Close();
serverConn.Dispose();
clientConn.Close();
clientConn.Dispose();
}
public static List<_CSyncDetails> Synchronize(string _pScopeName, SyncDirectionOrder _pDirection)
{
// Connection to SQL Server database
SqlConnection serverConn =
new SqlConnection(ServerConnString);
// Connection to SQL client database
SqlConnection clientConn =
new SqlConnection(ClientConnString);
List<_CSyncDetails> _Statics = new List<_CSyncDetails>();
// Perform Synchronization between SQL Server and the SQL client.
SyncOrchestrator syncOrchestrator = new SyncOrchestrator();
// Create provider for SQL Server
SqlSyncProvider serverProvider = new SqlSyncProvider(_pScopeName, serverConn);
// Set the command timeout and maximum transaction size for the SQL Azure provider.
SqlSyncProvider clientProvider = new SqlSyncProvider(_pScopeName, clientConn);
clientProvider.CommandTimeout = serverProvider.CommandTimeout = 500;
//Set memory allocation to the database providers
clientProvider.MemoryDataCacheSize = serverProvider.MemoryDataCacheSize = MemorySize;
//Set application transaction size on destination provider.
serverProvider.ApplicationTransactionSize = BatchSize;
//Count transactions
serverProvider.ChangesApplied +=
new EventHandler<DbChangesAppliedEventArgs>(RemoteProvider_ChangesApplied);
// Set Local provider of SyncOrchestrator to the server provider
syncOrchestrator.LocalProvider = serverProvider;
// Set Remote provider of SyncOrchestrator to the client provider
syncOrchestrator.RemoteProvider = clientProvider;
// Set the direction of SyncOrchestrator session to Upload and Download
syncOrchestrator.Direction = _pDirection;
// Create SyncOperations Statistics Object
SyncOperationStatistics syncStats = syncOrchestrator.Synchronize();
_Statics.Add(new _CSyncDetails
{
UploadChangesTotal = syncStats.UploadChangesTotal,
SyncStartTime = syncStats.SyncStartTime,
DownloadChangesTotal = syncStats.DownloadChangesTotal,
SyncEndTime = syncStats.SyncEndTime
});
// Shut down database connections.
serverConn.Close();
serverConn.Dispose();
clientConn.Close();
clientConn.Dispose();
return _Statics;
}
private static void RemoteProvider_ChangesApplied(object sender, DbChangesAppliedEventArgs e)
{
Console.WriteLine("Changes Applied");
}
public void _MSync()
{
// Define the Products table.
List<string> _Tablelist = new List<string>();
_Tablelist.AddRange(_cSynchronization._MGetAllTableList());
foreach (string tbl in _Tablelist)
{
SetUp(tbl);
_cSynchronization.Synchronize(tbl + "_SCOP", SyncDirectionOrder.DownloadAndUpload);
}
}
}
}
Select Statements for Before and After Tests
SELECT *
FROM SyncTestServer.dbo.Table1
SELECT *
FROM SyncTestServer.dbo.Table2
SELECT *
FROM SyncTestClient.dbo.Table1
SELECT *
FROM SyncTestClient.dbo.Table2
Before the Sync
This was the DB state before the sync.
After Sync
This was the state afterward. So, the sync appears to have worked.
Reattempted with a Remote Server and Three Concurrent Syncs
This is the connection string for the remote DB.
public const string ServerConnString =
@"data source=x0x0x0x0x0x.database.windows.net,1433;initial catalog=SyncTestServer01;user id=xoxoxox#xoxoxox;password=xoxoxoxoxox;MultipleActiveResultSets=True;";
This is the modification to mimic three concurrent synchronizations.
public class Program
{
public static string[] ClientConnString = new string[]
{
@"Data Source=.\SQLExpress;initial catalog=SyncTestClient01;integrated security=True;MultipleActiveResultSets=True;"
,@"Data Source=.\SQLExpress;initial catalog=SyncTestClient02;integrated security=True;MultipleActiveResultSets=True;"
,@"Data Source=.\SQLExpress;initial catalog=SyncTestClient03;integrated security=True;MultipleActiveResultSets=True;"
};
static void Main(string[] args)
{
foreach (var connString in ClientConnString)
{
Action action = () =>
{
_cSynchronization sync = new _cSynchronization();
sync._MSync(connString);
};
Task.Factory.StartNew(action);
}
Console.ReadLine();
}
}
I'm afraid I wasn't able to recreate the error that you experienced. Please turn on tracing and then post the tracing results to your question. That way we can analyze the trace and see what's problematic.
I am using TransactionScope in my repository unit tests to rollback any changes made by tests.
Setup and teardown procedures for tests look like this:
[TestFixture]
public class DeviceRepositoryTests {
private static readonly string ConnectionString =
ConfigurationManager.ConnectionStrings["TestDB"].ConnectionString;
private TransactionScope transaction;
private DeviceRepository repository;
[SetUp]
public void SetUp() {
transaction = new TransactionScope(TransactionScopeOption.Required);
repository = new DeviceRepository(ConnectionString);
}
[TearDown]
public void TearDown() {
transaction.Dispose();
}
}
The problematic test consists of code that inserts records into the database and code under test (CUT) that retrieves those records.
[Test]
public async Task GetAll_DeviceHasSensors_ReturnsDevicesWithSensors() {
int device1Id = AddDevice();
AddSensor();
var devices = await repository.GetAllAsync();
// Asserts
}
The AddDevice and AddSensor methods open a SQL connection and insert a row into the database:
private int AddDevice() {
var sqlString = "<SQL>";
using (var connection = CreateConnection())
using (var command = new SqlCommand(sqlString, connection)) {
var insertedId = command.ExecuteScalar();
Assert.AreNotEqual(0, insertedId);
return (int) insertedId;
}
}
private void AddSensor() {
const string sqlString = "<SQL>";
using (var connection = CreateConnection())
using (var command = new SqlCommand(sqlString, connection)) {
var rowsAffected = command.ExecuteNonQuery();
Assert.AreEqual(1, rowsAffected);
}
}
private SqlConnection CreateConnection() {
var result = new SqlConnection(ConnectionString);
result.Open();
return result;
}
The GetAllAsync method opens a connection, executes a query, and for each fetched row opens a new connection to fetch child objects.
public class DeviceRepository {
private readonly string connectionString;
public DeviceRepository(string connectionString) {
this.connectionString = connectionString;
}
public async Task<List<Device>> GetAllAsync() {
var result = new List<Device>();
const string sql = "<SQL>";
using (var connection = await CreateConnection())
using (var command = GetCommand(sql, connection, null))
using (var reader = await command.ExecuteReaderAsync()) {
while (await reader.ReadAsync()) {
var device = new Device {
Id = reader.GetInt32(reader.GetOrdinal("id"))
};
device.Sensors = await GetSensors(device.Id);
result.Add(device);
}
}
return result;
}
private async Task<List<Sensor>> GetSensors(int deviceId) {
var result = new List<Sensor>();
const string sql = "<SQL>";
using (var connection = await CreateConnection())
using (var command = GetCommand(sql, connection, null))
using (var reader = await command.ExecuteReaderAsync()) {
while (await reader.ReadAsync()) {
// Fetch row and add object to result
}
}
return result;
}
private async Task<SqlConnection> CreateConnection() {
var connection = new SqlConnection(connectionString);
await connection.OpenAsync();
return connection;
}
}
The problem is that when the GetSensors method calls SqlConnection.Open, I get the following exception:
System.Transactions.TransactionAbortedException : The transaction has aborted.
----> System.Transactions.TransactionPromotionException : Failure while attempting to promote transaction.
----> System.Data.SqlClient.SqlException : There is already an open DataReader associated with this Command which must be closed first.
----> System.ComponentModel.Win32Exception : The wait operation timed out
I could move code that fetches child object out of the first connection scope (this would work), but let's say I don't want to.
Does this exception mean that it is impossible to open simultaneous connections to DB inside single TransactionScope?
Edit
GetCommand just calls the SqlCommand constructor and does some logging.
private static SqlCommand GetCommand(string sql, SqlConnection connection, SqlParameter[] parameters) {
LogSql(sql);
var command = new SqlCommand(sql, connection);
if (parameters != null)
command.Parameters.AddRange(parameters);
return command;
}
The issue is that two DataReader objects can't be open at the same time on a single connection (unless MARS is enabled). This restriction is by design. As I see it, you have a few options:
Enable MARS in your connection string by adding MultipleActiveResultSets=True.
Don't use a DataReader if it's really not necessary. The way your code is written, though, it's pretty necessary.
Populate the Sensors property after loading the devices.
Use Dapper; it can do all of this (including populating Sensors), and likely faster.
Using Dapper you could do something like this (and you wouldn't need GetSensors):
public async Task<List<Device>> GetAllAsync() {
var result = new List<Device>();
const string sql = "<SQL>";
using (var connection = await CreateConnection())
using (var multi = connection.QueryMultiple(sql)) {
result = multi.Read<Device>().ToList();
var sensors = multi.Read<Sensor>().ToList();
result.ForEach(device => device.Sensors =
sensors.Where(s => s.DeviceId == device.Id).ToList());
}
return result;
}
Here your sql would look like this:
SELECT * FROM Devices
SELECT * FROM Sensors
See the Multi Mapping documentation for Dapper.
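For completeness, option 3 (populate Sensors after loading the devices) could look roughly like this. It is a sketch against the question's own GetSensors, GetCommand, and CreateConnection helpers, with the same `<SQL>` placeholder: finish reading all the devices so the first reader is closed, and only then fetch the children.

```csharp
public async Task<List<Device>> GetAllAsync() {
    var result = new List<Device>();
    const string sql = "<SQL>";
    using (var connection = await CreateConnection())
    using (var command = GetCommand(sql, connection, null))
    using (var reader = await command.ExecuteReaderAsync()) {
        while (await reader.ReadAsync()) {
            // First pass: materialize devices only; no nested queries here
            result.Add(new Device {
                Id = reader.GetInt32(reader.GetOrdinal("id"))
            });
        }
    } // reader and connection are disposed before any new connection opens

    // Second pass: only one reader is ever open at a time now
    foreach (var device in result)
        device.Sensors = await GetSensors(device.Id);

    return result;
}
```

As the question itself notes, restructuring the code so the child fetch happens outside the first connection's scope does work.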
I need to access data from two different DbContexts at the same time, making sure each uses READ UNCOMMITTED for their queries (really, the important thing is that it doesn't lock the rows it iterates over - adding WITH NO LOCK to the query would work too).
How can you do this using Entity Framework? If I wrap each of the two queries in a TransactionScope, it tries to promote the transaction to MSDTC which isn't an option for us.
private static IEnumerable<Image> EnumerateSourceImages()
{
using (var dbContext = new SourceDbContext())
{
using (var transScope = new TransactionScope(
TransactionScopeOption.RequiresNew,
new TransactionOptions() {
IsolationLevel = IsolationLevel.ReadUncommitted
}
)
)
{
var imagesSourceQuery = dbContext.ImageDatas
.AsNoTracking()
.OrderBy(imageData => imageData.ImageID);
foreach (var image in imagesSourceQuery)
{
yield return image;
}
transScope.Complete();
}
}
}
private static IEnumerable<Image> EnumerateDestinationImages()
{
using (var dbContext = new DestinationDbContext())
{
using (var transScope = new TransactionScope(
TransactionScopeOption.RequiresNew,
new TransactionOptions() {
IsolationLevel = IsolationLevel.ReadUncommitted
}
)
)
{
var imagesSourceQuery = dbContext.ImageDatas
.AsNoTracking()
.OrderBy(imageData => imageData.ImageID);
foreach (var image in imagesSourceQuery)
{
yield return image;
}
transScope.Complete();
}
}
}
private static void Main(string[] args)
{
IEnumerator<Image> sourceImagesEnumerator = null;
IEnumerator<Image> destImagesEnumerator = null;
try{
sourceImagesEnumerator = EnumerateSourceImages().GetEnumerator();
destImagesEnumerator = EnumerateDestinationImages().GetEnumerator();
bool sourceHasMore = sourceImagesEnumerator.MoveNext();
//Exception on next line about MSDTC Promotion
bool destHasMore = destImagesEnumerator.MoveNext();
} finally{
if(sourceImagesEnumerator != null) sourceImagesEnumerator.Dispose();
if(destImagesEnumerator != null) destImagesEnumerator.Dispose();
}
}
Did you try setting enlist=false in your connection string?
http://forums.asp.net/t/1401606.aspx/1
Brgrds,
Lari
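For reference, enlistment can be turned off in the connection string itself, so the DbContext's connection never joins the ambient transaction and no promotion to MSDTC is attempted. A sketch, assuming SourceDbContext forwards a connection string to the DbContext base constructor; the server, catalog, and table names here are made up:

```csharp
// Enlist=false: the connection will not auto-enlist in the ambient
// System.Transactions transaction, so opening a second connection
// no longer triggers MSDTC promotion.
const string sourceConnString =
    "Data Source=.;Initial Catalog=SourceDb;" +
    "Integrated Security=True;Enlist=false";

using (var dbContext = new SourceDbContext(sourceConnString))
{
    // Caveat: with enlistment off, the TransactionScope's ReadUncommitted
    // isolation level no longer reaches this connection. Dirty reads would
    // instead come from a table hint in a raw query, e.g.:
    var images = dbContext.Database
        .SqlQuery<Image>(
            "SELECT * FROM ImageDatas WITH (NOLOCK) ORDER BY ImageID")
        .ToList();
}
```

Since the question says a WITH (NOLOCK) hint would be acceptable, this trade-off may be fine here.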