I am building an application with C# and I decided to use the Enterprise Library for the DAL (SQL Server).
I don't remember where, but I read an article about EntLib which said that connections are closed automatically.
Is that true?
If not, what is the best approach to managing connections in the middle layer?
Open and close in each method?
Below is a sample method showing how I am using EntLib:
// db is assumed to be an EntLib SqlDatabase field created elsewhere (e.g. via DatabaseFactory).
public DataSet ReturnSomething(long hotelID, string date)
{
    var sqlStr = "select something"; // placeholder query
    DbCommand cmd = db.GetSqlStringCommand(sqlStr);
    db.AddInParameter(cmd, "@param1", SqlDbType.BigInt, hotelID);
    db.AddInParameter(cmd, "@param2", SqlDbType.NVarChar, date);
    return db.ExecuteDataSet(cmd);
}
Thanks in advance.
The ExecuteDataSet method returns a DataSet object that contains all the data, giving you your own local copy. The call to ExecuteDataSet opens a connection, populates the DataSet, and closes the connection before returning the result.
For more info:
http://msdn.microsoft.com/en-us/library/ff648933.aspx
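For illustration only (this is not the actual EntLib source), the pattern ExecuteDataSet wraps for you is roughly the classic ADO.NET "fill and disconnect" approach that you would otherwise write by hand:
// Rough sketch of the "open, fill, close" pattern that ExecuteDataSet performs for you.
// connectionString and sqlStr are illustrative placeholders.
public DataSet FillAndDisconnect(string connectionString, string sqlStr)
{
    var result = new DataSet();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sqlStr, connection))
    using (var adapter = new SqlDataAdapter(command))
    {
        // SqlDataAdapter.Fill opens the connection if it is closed and closes it
        // again when it is done, so nothing is left open afterwards.
        adapter.Fill(result);
    }
    return result;
}
Because the DataSet is fully populated in memory, no connection stays open once the method returns.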
I think you should have something like a static class used as a Façade which would provide the correct connection for your library subsystems.
public static class SystemFacade {
    // Name of the EntLib connection string in configuration (placeholder value).
    private const string NameOfEntLibNamedConnection = "MyConnection";

    // Used as a subsystem to which the connections are provided.
    private static readonly SystemFactory _systemFactory = new SystemFactory();

    public static IList<Customer> GetCustomers() {
        using (var connection = OpenConnection(NameOfEntLibNamedConnection)) {
            return _systemFactory.GetCustomers(connection);
        }
    }

    public static DbConnection OpenConnection(string connectionName) {
        // Read the EntLib config and create a new connection here, and ensure
        // it is opened before you return it (CreateConnection is one way to do that).
        var connection = DatabaseFactory.CreateDatabase(connectionName).CreateConnection();
        if (connection.State == ConnectionState.Closed)
            connection.Open();
        return connection;
    }
}
internal class SystemFactory {
    internal IList<Customer> GetCustomers(DbConnection connection) {
        // Place code to get customers here.
        throw new NotImplementedException();
    }
}
And using this code:
public class MyPageClass {
    private void DisplayCustomers() {
        GridView.DataSource = SystemFacade.GetCustomers();
    }
}
In this code sample, you have a static class that provides the functionality and features of a class library. The Façade class is there to offer the user all the possible actions without the headache of deciding which connection to use, etc. All you want is the list of customers out of the underlying datastore, and a call to GetCustomers will do it.
The Façade is an "intelligent" class that knows where to get the information from, so it creates the connection accordingly and orders the customers from the subsystem factory. The factory does what it is asked to do: it takes the available connection and retrieves the customers without asking any further questions.
Does this help?
Yes, EntLib closes connections for you (actually it releases them back into the connection pool). That is the main reason why we originally started to use EntLib.
However, for all new development we have now moved on to Entity Framework; we find it much more productive.
Related
I am overthinking this and can't find the right terms to search on in Google.
I have a website that connects to a SQL Server database with mostly centralized classes to call out to the database.
I was able to copy/paste those classes and tweak them to connect to a Postgres database with simple changes, like changing:
This for connections:
SqlConnection connection = new SqlConnection(ConnectionManagerToUse);
to this:
PgSqlConnection connection = new PgSqlConnection(ConnectionManagerToUse);
and this for commands:
SqlCommand SelectDataCommand = new SqlCommand();
to this:
PgSqlCommand SelectDataCommandPostgres = new PgSqlCommand();
This has worked great so far by changing all instances of the above. Now I want to see if there is a way to have a class or something acting as a middle man that I can call, so that switching between Postgres and SQL Server only requires changing code in that one middle function rather than everywhere I call out to the database (obviously specific SQL calls and SQL syntax would be different; I am looking for a way to swap the underlying code calling out to the database, not the actual SQL syntax).
Something like the below. I would call this instead of the above (I know this does not work; it is just an example). Then, if I want to change database technology, I just change a single variable or setting in this centralized class/config file (or even have the ability to connect to two databases with different technologies at once).
For example, for connections:
public T GetConnection<T>()
{
    if (_DBTechnology == "MSSQL")
    {
        return new SqlConnection(_connectionStringMSSQL);
    }
    else if (_DBTechnology == "POSTGRES")
    {
        return new PgSqlConnection(_connectionStringPostgres);
    }
}
I know the above is not valid and I have only used generics a few times so this may not be the way to do this at all. So any suggestions would be helpful.
I was able to come up with a solution for this. Below is the basic code to do what I was looking to do. I am working on expanding it for more versatility and usability across multiple scenarios, but this covers the basics.
This works because the classes/methods for SQL Server and Postgres behave the same way; both providers' classes derive from the common System.Data.Common base types (DbConnection, DbCommand).
The drivers I used for Postgres were Devart.Data and Devart.Data.PostgreSql (found through NuGet).
// Set the variable for which DB to use. The methods use "MS" for SQL Server and "PG" for Postgres.
string DBToUse = "MS"; // "PG" for Postgres

// Call to get the empty connection:
DbConnection connection = GetConnection(DBToUse);

// Call to get the empty command:
DbCommand SelectDataCommandToBuild = GetCommand(DBToUse);
// Here are the methods to call to get the correct DB connection and command.
public static DbConnection GetConnection(string DBToUse)
{
DbConnection connection = null;
if (DBToUse == "MS")
{
connection = new SqlConnection(_connectionStringMSSQL);
}
else if (DBToUse == "PG")
{
connection = new PgSqlConnection(_connectionStringPostgres);
}
// default just return null
return connection;
}// generic SQL Connection
public static DbCommand GetCommand(string DBToUse)
{
DbCommand command = null;
if (DBToUse == "MS")
{
command = new SqlCommand();
}
else if (DBToUse == "PG")
{
command = new PgSqlCommand();
}
// default just return null
return command;
}// generic SQL Command
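As a side note, ADO.NET on .NET Framework already ships with a similar switching mechanism in DbProviderFactories. A sketch of that alternative is below; the provider invariant names, especially the Devart one, are assumptions you would need to verify against the providers registered on your machine.
// Sketch only: resolve an ADO.NET provider factory by invariant name and let it
// create the provider-specific connection and command objects.
public static class DbFactoryHelper
{
    // Illustrative invariant names; check machine.config/app.config for the real ones.
    private const string MsSqlProvider = "System.Data.SqlClient";
    private const string PostgresProvider = "Devart.Data.PostgreSql";

    public static DbProviderFactory GetFactory(string dbToUse)
    {
        return DbProviderFactories.GetFactory(dbToUse == "PG" ? PostgresProvider : MsSqlProvider);
    }

    public static DbConnection GetConnection(string dbToUse, string connectionString)
    {
        DbConnection connection = GetFactory(dbToUse).CreateConnection(); // provider-specific type
        connection.ConnectionString = connectionString;
        return connection;
    }

    public static DbCommand GetCommand(string dbToUse)
    {
        return GetFactory(dbToUse).CreateCommand(); // provider-specific type
    }
}
The caller still only deals with DbConnection and DbCommand, exactly as in the code above.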
I am writing SQL Server integration tests using xUnit.
My tests look like this:
[Fact]
public void Test()
{
    using (IDbConnection connection = new SqlConnection(_connectionString))
    {
        connection.Open();
        connection.Query($@"...");
        // DO SOMETHING
    }
}
Since I am planning to create multiple tests in the same class, I was trying to avoid creating a new SqlConnection every time. I know that the xUnit framework creates a new instance of the test class for each test it runs.
This means I could create the SqlConnection in the constructor, since a new one will be created for every test run anyway.
The problem is disposing of it. Is it good practice, or do you see any problem with manually disposing of the SqlConnection at the end of each test?
Such as:
public class MyTestClass
{
    private const string _connectionString = "...";
    private readonly IDbConnection _connection;

    public MyTestClass()
    {
        _connection = new SqlConnection(_connectionString);
    }

    [Fact]
    public void Test()
    {
        _connection.Open();
        _connection.Query($@"...");
        // DO SOMETHING
        _connection.Dispose();
    }
}
I was trying to avoid creating a new SqlConnection every time.
It's okay to create and dispose a new SqlConnection over and over. The connection instance is disposed, but the underlying connection is pooled. Under the hood it may actually use the same connection repeatedly without closing it, and that's okay.
See SQL Server Connection Pooling.
So if that's your concern, don't worry. What you are doing in the first example is completely harmless. And however your code is structured, if you're creating a new connection when you need it, using it, and disposing it, that's fine. You can do that all day long.
If what concerns you is the repetitive code, I find this helpful. I often have a class like this in my unit tests:
static class SqlExecution
{
public static void ExecuteSql(string sql)
{
using (var connection = new SqlConnection(GetConnectionString()))
{
using (var command = new SqlCommand(sql, connection))
{
connection.Open();
command.ExecuteNonQuery();
}
}
}
public static T ExecuteScalar<T>(string sql)
{
using (var connection = new SqlConnection(GetConnectionString()))
{
using (var command = new SqlCommand(sql, connection))
{
connection.Open();
return (T)command.ExecuteScalar();
}
}
}
public static string GetConnectionString()
{
// This may vary
}
}
How you obtain the connection string may vary, which is why I left that method empty. It might be a configuration file, or it could be hard-coded.
This covers the common scenarios of executing something and retrieving a value, and allows me to keep all the repetitive database code out of my tests. Within the test is just one line:
SqlExecution.ExecuteSql("UPDATE WHATEVER");
You can also use dependency injection with xUnit, so you could write something as an instance class and inject it. But I doubt that it's worth the effort.
If I find myself writing more and more repetitive code then I might add test-specific methods. For example the test class might have a method that executes a certain stored procedure or formats arguments into a query and executes it.
You can also do dependency injection with xUnit, so that dependencies are injected into the test class like into any other class. There are lots of answers about this, and it may even be the reason you're using xUnit. But something simpler might get the job done.
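For reference, a minimal sketch of xUnit's built-in mechanism for this, a class fixture shared by all the tests in one class, might look like the following (DatabaseFixture is an illustrative name, and the Query call assumes the same Dapper-style extension used in the question):
// Created once before the class's tests run, disposed after they all finish.
public class DatabaseFixture : IDisposable
{
    public IDbConnection Connection { get; private set; }

    public DatabaseFixture()
    {
        Connection = new SqlConnection("...");
        Connection.Open();
    }

    public void Dispose()
    {
        Connection.Dispose();
    }
}

public class MyTestClass : IClassFixture<DatabaseFixture>
{
    private readonly DatabaseFixture _fixture;

    // xUnit injects the shared fixture through the constructor.
    public MyTestClass(DatabaseFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public void Test()
    {
        _fixture.Connection.Query($@"...");
        // DO SOMETHING
    }
}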
Someone is bound to say that unit tests shouldn't talk to the database. And they're mostly right. But it doesn't matter. Sometimes we have to do it anyway. Maybe the code we need to unit test is a stored procedure, and executing it from a unit test and verifying the results is the simplest way to do that.
Someone else might say that we shouldn't have logic in the database that needs testing, and I agree with them too. But we don't always have that choice. If I have to put logic in SQL I still want to test it, and writing a unit test is usually the easiest way. (We can call it an "integration test" if it makes anyone happy.)
There are a couple of different things at play here. First, SqlConnection uses connection pooling, so it should be fine to create/dispose the connections within a single test.
Having said that, disposing of the connections on a per-test-class basis would also be doable; xUnit will dispose of test classes that implement IDisposable. So either would work, but I'd suggest it's cleaner to create/dispose them within the test.
public class MyTestClass : IDisposable
{
    const string ConnectionStr = "";

    // Option 1: per-test-class connection, cleaned up by xUnit via IDisposable.
    SqlConnection conn;

    public MyTestClass()
    {
        this.conn = new SqlConnection(ConnectionStr);
    }

    public void Dispose()
    {
        this.conn?.Dispose();
        this.conn = null;
    }

    // Option 2: per-test connection, created and disposed inside the test itself.
    [Fact]
    public void Test()
    {
        using (var conn = new SqlConnection(ConnectionStr))
        {
        }
    }
}
I have basically created a class which, when a user logs into the website, queries the database and stores some settings in a List (so I have key/value pairs).
The reason for this is that I want to always be able to access these settings without going to the database again.
I put these in a class, loop through the fields returned by a SQL query, and add them to the list.
How can I then access these variables from another part of the application? Or is there a better way to do this? I'm talking server side, not really client side.
Here is an example of what I had at the moment:
public static void createSystemMetaData()
{
    string constring = ConfigurationManager.ConnectionStrings["Test"].ConnectionString;
    using (SqlConnection sql = new SqlConnection(constring))
    {
        sql.Open();
        SqlCommand systemMetaData = new SqlCommand("SELECT * FROM SD_TABLES", sql);

        //Set Modules
        var Modules = new List<KeyValuePair<string, string>>();
        using (SqlDataReader systemMetaDataReader = systemMetaData.ExecuteReader())
        {
            while (systemMetaDataReader.Read())
            {
                // Read the column values (GetOrdinal alone would only return the column index).
                var name = systemMetaDataReader["Sequence"].ToString();
                var value = systemMetaDataReader["Property"].ToString();
                Modules.Add(new KeyValuePair<string, string>(name, value));
            }
        }
    }
}
Thanks
Any static properties of a class will be preserved for the lifetime of the application pool, assuming you're using ASP.NET under IIS.
So a very simple class might look like:
public static class MyConfigClass
{
    public static Lazy<Something> MyConfig = new Lazy<Something>(() => GetSomethings());

    public static Something GetSomethings()
    {
        // This will only be called once in your web application.
        throw new NotImplementedException(); // load and return your settings here
    }
}
You can then consume this by simply calling
MyConfigClass.MyConfig.Value
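Adapted to the settings list in the question (a sketch only; the SQL, column names, and connection string name are taken from your example), that could look like this:
public static class SystemMetaData
{
    // Loaded lazily on first access and then kept for the lifetime of the app pool.
    public static readonly Lazy<List<KeyValuePair<string, string>>> Modules =
        new Lazy<List<KeyValuePair<string, string>>>(LoadModules);

    private static List<KeyValuePair<string, string>> LoadModules()
    {
        var modules = new List<KeyValuePair<string, string>>();
        string constring = ConfigurationManager.ConnectionStrings["Test"].ConnectionString;
        using (var sql = new SqlConnection(constring))
        using (var command = new SqlCommand("SELECT * FROM SD_TABLES", sql))
        {
            sql.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    modules.Add(new KeyValuePair<string, string>(
                        reader["Sequence"].ToString(),
                        reader["Property"].ToString()));
                }
            }
        }
        return modules;
    }
}
Anywhere on the server side you can then read SystemMetaData.Modules.Value.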
For fewer users you can go with session state as Bob suggested; however, with more users you might need to move to a state server or load it from the database each time.
As others have pointed out, the risk of holding these values in global memory is that the values might change. Also, global variables are a bad design decision as you can end up with various parts of your application reading and writing to these values, which makes debugging problems harder than it need be.
A commonly adopted solution is to wrap your database access inside a facade class. This class can then cache the values if you wish to avoid hitting the database for each request. In addition, as changes are routed through the facade too, it knows when the data has changed and can empty its cache (forcing a database re-read) when this occurs. As an added bonus, it becomes possible to mock the facade in order to test code without touching the database (database access is notoriously difficult to unit test).
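As a rough sketch of that idea (SettingsFacade and its members are illustrative names, not a prescribed API; the data-access bodies are left as placeholders):
// Facade that callers use instead of touching the database directly.
// It caches the settings and invalidates the cache when a change goes through it.
public class SettingsFacade
{
    private readonly object _lock = new object();
    private List<KeyValuePair<string, string>> _cache;

    // All reads go through here; the database is only hit when the cache is empty.
    public List<KeyValuePair<string, string>> GetSettings()
    {
        lock (_lock)
        {
            if (_cache == null)
            {
                _cache = LoadFromDatabase();
            }
            return _cache;
        }
    }

    // All writes go through here too, so the facade knows when to drop its cache.
    public void UpdateSetting(string key, string value)
    {
        WriteToDatabase(key, value);
        lock (_lock)
        {
            _cache = null; // force a re-read on the next GetSettings call
        }
    }

    // Placeholders: the real implementations would contain your ADO.NET code.
    private List<KeyValuePair<string, string>> LoadFromDatabase() { throw new NotImplementedException(); }
    private void WriteToDatabase(string key, string value) { throw new NotImplementedException(); }
}
Putting an interface over this class is what makes it easy to mock in unit tests.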
From the looks of things you are using universal values irrespective of users so an SqlCacheDependency would be useful here:
Make sure you set up a database dependency in web.config for the name Test.
public static class CacheData {
    public static List<KeyValuePair<string,string>> GetData() {
        var cache = System.Web.HttpContext.Current.Cache;
        SqlCacheDependency SqlDep = null;
        var modules = cache["Modules"] as List<KeyValuePair<string,string>>;
        if (modules == null) {
            // Because of possible exceptions thrown when this
            // code runs, use Try...Catch...Finally syntax.
            try {
                // Instantiate SqlDep using the SqlCacheDependency constructor.
                SqlDep = new SqlCacheDependency("Test", "SD_TABLES");
            }
            // Handle the DatabaseNotEnabledForNotificationException with
            // a call to the SqlCacheDependencyAdmin.EnableNotifications method.
            catch (DatabaseNotEnabledForNotificationException) {
                SqlCacheDependencyAdmin.EnableNotifications("Test");
            }
            // Handle the TableNotEnabledForNotificationException with
            // a call to the SqlCacheDependencyAdmin.EnableTableForNotifications method.
            catch (TableNotEnabledForNotificationException) {
                SqlCacheDependencyAdmin.EnableTableForNotifications("Test", "SD_TABLES");
            }
            finally {
                // Assign a value to modules (load it from the database) before calling the next line.
                cache.Insert("Modules", modules, SqlDep);
            }
        }
        return modules;
    }
}
I've got an MVC3 project and one of the models is built as a separate class library project, for re-use in other applications.
I'm using mini-profiler and would like to find a way to profile the database connections and queries that are made from this class library and return the results to the MVC3 application.
Currently, in my MVC3 app, the existing models grab a connection using the following helper class:
public class SqlConnectionHelper
{
public static DbConnection GetConnection()
{
var dbconn = new SqlConnection(ConfigurationManager.ConnectionStrings["db"].ToString());
return new StackExchange.Profiling.Data.ProfiledDbConnection(dbconn, MiniProfiler.Current);
}
}
The external model can't call this function though, because it knows nothing of the MVC3 application, or of mini-profiler.
One way I thought of would be to have an IDbConnection Connection field on the external model and then pass in a ProfiledDbConnection object to this field before I call any of the model's methods. The model would then use whatever's in this field for database connections, and I should get some profiled results in the MVC3 frontend.
However, I'm not sure if this would work, or whether it's the best way of doing this. Is there a better way I'm missing?
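For reference, the idea described above would look roughly like this (a sketch only; ExternalModel, its members, and the query are illustrative, not an existing API):
// In the class library: the model only knows about the abstract connection type.
public class ExternalModel
{
    // The caller supplies whatever connection it wants, profiled or plain.
    public DbConnection Connection { get; set; }

    public int CountOrders()
    {
        if (Connection.State == ConnectionState.Closed)
            Connection.Open();
        using (var command = Connection.CreateCommand())
        {
            command.CommandText = "SELECT COUNT(*) FROM Orders"; // illustrative query
            return Convert.ToInt32(command.ExecuteScalar());
        }
    }
}

// Somewhere in the MVC3 application: hand the model a profiled connection before calling it.
public static int GetOrderCount()
{
    var model = new ExternalModel { Connection = SqlConnectionHelper.GetConnection() };
    return model.CountOrders();
}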
ProfiledDbConnection isn't dapper: it is mini-profiler. We don't provide any magic that can take over all connection creation; the only thing I can suggest is to maybe expose an event in your library that can be subscribed externally - so the creation code in the library might look a bit like:
public static event SomeEventType ConnectionCreated;

static DbConnection CreateConnection() {
    var conn = ExistingDbCreationCode();
    var handler = ConnectionCreated;
    if (handler != null) {
        var args = new SomeEventArgsType { Connection = conn };
        handler(typeof(YourType), args);
        conn = args.Connection;
    }
    return conn;
}
which could give external code the chance to do whatever they want, for example:
YourType.ConnectionCreated += (s,a) => {
a.Connection = new StackExchange.Profiling.Data.ProfiledDbConnection(
a.Connection, MiniProfiler.Current);
};
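The SomeEventType and SomeEventArgsType placeholders above are deliberately vague; one possible concrete shape (purely illustrative, not part of mini-profiler or dapper) would be:
// Illustrative definitions for the placeholders used above.
public class ConnectionCreatedEventArgs : EventArgs {
    public DbConnection Connection { get; set; }
}

// The event could then be declared as:
// public static event EventHandler<ConnectionCreatedEventArgs> ConnectionCreated;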
I am using the following code and want to know whether we need to set the command timeout when using CreateSprocAccessor of Enterprise Library; if not, how is the timeout being managed?
var accessor = _sqlDatabase.CreateSprocAccessor<xyz>("uspGetxyz",
new xyzParameters(_sqlDatabase),
MapBuilder<xyz>.MapAllProperties().Build());
//Execute the accessor to obtain the results
var Data = accessor.Execute();
xyzList = Data.ToList<xyz>();
I started using Microsoft Enterprise Library long back, and in the normal case the DB operation calls using the provided methods of the "Database" class fulfill the need. In some cases, for long-running queries, the developer wants to set the CommandTimeout property of the SqlCommand (or DbCommand) class. This allows the query to execute for as long as the value set in the command timeout.
By default the Data Access Application Block does not take a simple CommandTimeout parameter in its method calls (there are many workaround samples available on the net). To achieve this with minimal changes, I added a simple function named "WithCommandTimeOut", taking a timeOutSeconds parameter, to the "Microsoft.Practices.EnterpriseLibrary.Data.Database" class, which returns the same instance of the "Database" class. Refer to the updated code snippet below for the changes. Hope this will solve the timeout problem.
//Class Level Static Variables
//Used to reset to default after assigning in "PrepareCommand" static method
static int DEFAULT_COMMAND_TIMEOUT_RESET = 30;
//Default value when "WithCommandTimeOut" not called
static int COMMAND_TIMEOUT_FOR_THIS_CALL = DEFAULT_COMMAND_TIMEOUT_RESET;
public Database WithCommandTimeOut(int timeOutSeconds)
{
COMMAND_TIMEOUT_FOR_THIS_CALL = timeOutSeconds;
return this;
}
protected static void PrepareCommand(DbCommand command, DbConnection connection)
{
if (command == null) throw new ArgumentNullException("command");
if (connection == null) throw new ArgumentNullException("connection");
//Here is the magical code ----------------------------
command.CommandTimeout = COMMAND_TIMEOUT_FOR_THIS_CALL;
//Here is the magical code ----------------------------
command.Connection = connection;
//Resetting value to default as this is static and subsequent
//db calls should work with default timeout i.e. 30
COMMAND_TIMEOUT_FOR_THIS_CALL = DEFAULT_COMMAND_TIMEOUT_RESET;
}
Ex.
Database db = EnterpriseLibraryContainer.Current.GetInstance<Database>("SmartSoftware");
db.WithCommandTimeOut(0).ExecuteDataSet(CommandType.Text, query);
I can't believe what a blunder the Enterprise Library team has made: they have not given any way to set the command timeout in the case of an Accessor. It is a known issue with them:
http://entlib.codeplex.com/workitem/28586
Can't believe it; I have developed a whole project and only just came to know this is a known issue :-(
We can update this in the connection string by increasing Connection Timeout=1000;
You can modify the DbCommand timeout in your xyzParameters class, in the AssignParameters method:
public void AssignParameters(
System.Data.Common.DbCommand command, object[] parameterValues)
{
command.CommandTimeout = 0;
...
}
Voila! I made changes in the source code of Enterprise Library: I added a new Execute method that takes a timeout parameter in the sproc accessor class, and used those binaries in my project.