I am using the following code and want to know whether we need to set a command timeout when using CreateSprocAccessor of Enterprise Library. If not, how is the timeout managed?
var accessor = _sqlDatabase.CreateSprocAccessor<xyz>("uspGetxyz",
new xyzParameters(_sqlDatabase),
MapBuilder<xyz>.MapAllProperties().Build());
//Execute the accessor to obtain the results
var Data = accessor.Execute();
xyzList = Data.ToList<xyz>();
I started using Microsoft Enterprise Library a long time ago; in the normal case, database calls made through the provided methods of the Database class cover the need. In some cases, for a long-running query, a developer wants to set the CommandTimeout property of the SqlCommand (or DbCommand) class, which allows the query to run for as long as the value set in the command timeout.
By default the Data Access Application Block does not accept a simple CommandTimeout parameter in its method calls (there are many workaround samples available on the net). To achieve this with minimal changes, I added a simple method named WithCommandTimeOut, taking a timeOutSeconds parameter, to the Microsoft.Practices.EnterpriseLibrary.Data.Database class; it returns the same Database instance. Refer to the updated code snippet below. Hope this solves the timeout problem.
//Class Level Static Variables
//Used to reset to default after assigning in "PrepareCommand" static method
static int DEFAULT_COMMAND_TIMEOUT_RESET = 30;
//Default value when "WithCommandTimeOut" not called
static int COMMAND_TIMEOUT_FOR_THIS_CALL = DEFAULT_COMMAND_TIMEOUT_RESET;
public Database WithCommandTimeOut(int timeOutSeconds)
{
COMMAND_TIMEOUT_FOR_THIS_CALL = timeOutSeconds;
return this;
}
protected static void PrepareCommand(DbCommand command, DbConnection connection)
{
if (command == null) throw new ArgumentNullException("command");
if (connection == null) throw new ArgumentNullException("connection");
//Here is the magical code ----------------------------
command.CommandTimeout = COMMAND_TIMEOUT_FOR_THIS_CALL;
//Here is the magical code ----------------------------
command.Connection = connection;
//Resetting value to default as this is static and subsequent
//db calls should work with default timeout i.e. 30
COMMAND_TIMEOUT_FOR_THIS_CALL = DEFAULT_COMMAND_TIMEOUT_RESET;
}
Example:
Database db = EnterpriseLibraryContainer.Current.GetInstance<Database>("SmartSoftware");
db.WithCommandTimeOut(0).ExecuteDataSet(CommandType.Text, query);
I can't believe what a blunder the Enterprise Library team has made: they have not provided any way to set the command timeout when using an accessor. It is a known issue with them:
http://entlib.codeplex.com/workitem/28586
Can't believe it. I have developed a whole project and only just came to know this is a known issue :-(
You can update this in the connection string by increasing Connection Timeout=1000;
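For example (the server and database names here are placeholders, and note that Connection Timeout governs how long opening the connection may take):
// Illustrative only: server and database names are placeholders.
var connectionString =
    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=True;Connection Timeout=1000";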
You can modify the DbCommand timeout in your xyzParameters class, in the AssignParameters method:
public void AssignParameters(
System.Data.Common.DbCommand command, object[] parameterValues)
{
command.CommandTimeout = 0;
...
}
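For completeness, a minimal sketch of what such a parameter-mapper class might look like, assuming xyzParameters implements the Data Access block's IParameterMapper interface (the parameter name and type below are placeholders):
using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

public class xyzParameters : IParameterMapper
{
    private readonly Database _database;

    public xyzParameters(Database database)
    {
        _database = database;
    }

    public void AssignParameters(DbCommand command, object[] parameterValues)
    {
        // 0 means "wait indefinitely"; use a finite value if you only need a longer limit.
        command.CommandTimeout = 0;

        // Map the accessor's parameter values onto the sproc's parameters
        // (the parameter name and type here are placeholders).
        _database.AddInParameter(command, "@SomeId", DbType.Int32, parameterValues[0]);
    }
}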
Voila! I made changes to the source code of Enterprise Library: I added a new Execute method that takes a timeout parameter in the sproc accessor class and used those binaries in my project.
I am overthinking this and can't find the right terms to search on in Google.
I have a website that connects to a SQL Server database with mostly centralized classes to call out to the database.
I was able to copy/paste those classes and tweak them to connect to a Postgres DB with simple changes like these:
This for connections:
SqlConnection connection = new SqlConnection(ConnectionManagerToUse);
PgSqlConnection connection = new PgSqlConnection(ConnectionManagerToUse);
and this for commands:
SqlCommand SelectDataCommand = new SqlCommand();
PgSqlCommand SelectDataCommandPostgres = new PgSqlCommand();
This has worked great so far by changing all instances of the above. Now I want to see if there is a way to have a class, or some kind of middleman I can call, so that I only have to change code in that one place to switch between Postgres and SQL Server, rather than everywhere I call out to these commands in my code. (Obviously specific SQL calls and SQL syntax would be different; I am looking for a way to swap the underlying code that calls out to the database, not the actual SQL syntax.)
Something like the below. I would call this instead of the above (I know this does not work, it is just an example). Then if I want to change database technology I just change a single variable or setting in this centralized class or config file (or even have the ability to connect to two different databases with different technologies at once).
For connections example:
public T GetConnection<T>()
{
if (_DBTechnology == "MSSQL")
{
return new SqlConnection(_connectionStringMSSQL);
}
else if (_DBTechnology == "POSTGRES")
{
return new PgSqlConnection(_connectionStringPostgres);
}
}
I know the above is not valid and I have only used generics a few times so this may not be the way to do this at all. So any suggestions would be helpful.
I was able to come up with a solution for this. Below is the basic code to do what I was looking for. I am working on expanding it for more versatility and usability across multiple scenarios, but this covers the basics.
This works because the classes/methods for both SQL Server and Postgres work similarly/the same.
The drivers I used for Postgres were Devart.Data and Devart.Data.PostgreSql (found through NuGet).
// Set which DB to use. The methods below use "MS" for SQL Server and "PG" for Postgres.
string DBToUse = "MS"; // "PG" for Postgres
// Call to get the (unopened) connection:
DbConnection connection = GetConnection(DBToUse);
// Call to get the matching empty command:
DbCommand SelectDataCommandToBuild = GetCommand(DBToUse);
// Here are the methods to call to get the correct DB objects
public static DbConnection GetConnection(string DBToUse)
{
DbConnection connection = null;
if (DBToUse == "MS")
{
connection = new SqlConnection(_connectionStringMSSQL);
}
else if (DBToUse == "PG")
{
connection = new PgSqlConnection(_connectionStringPostgres);
}
// default just return null
return connection;
}// generic SQL Connection
public static DbCommand GetCommand(string DBToUse)
{
DbCommand command = null;
if (DBToUse == "MS")
{
command = new SqlCommand();
}
else if (DBToUse == "PG")
{
command = new PgSqlCommand();
}
// default just return null
return command;
}// generic SQL Command
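For what it's worth, a small usage sketch of the two factory methods above, working only against the DbConnection/DbCommand base types (the table and columns are placeholders):
string DBToUse = "MS"; // or "PG"

using (DbConnection connection = GetConnection(DBToUse))
using (DbCommand command = GetCommand(DBToUse))
{
    command.Connection = connection;
    command.CommandText = "SELECT Id, Name FROM SomeTable"; // placeholder query
    connection.Open();

    using (DbDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader["Name"]);
        }
    }
}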
In an ASP.NET MVC application, I'm trying to use SQL Server's CONTEXT_INFO to pass the currently logged in user so my audit triggers record not only the web server login, but also the login of the site.
I'm having trouble being certain that the current user will always be fed into the database server context though.
On the backend I have everything set up: a sproc to set the context, a function to pull it, and DML triggers to record it; no problem.
The app end is a bit more involved. I subscribe to the Database.Connection.StateChange event so I can catch each newly opened connection and set this context accordingly.
Additionally, to be able to retrieve the current login ID of the MVC site in the data layer (which has no access to the web project), I supply a delegate to the EF constructor that returns the user ID. This also means that any other peripheral projects I have set up require this dependency as well, and it keeps most of the implementation detail out of my hair during web development:
public class CoreContext : DbContext
{
Func<int> _uidObtainer;
public CoreContext(Func<int> uidObtainer) : base(nameof(CoreContext)) { construct(uidObtainer); }
public CoreContext(Func<int> uidObtainer, string connection) : base(connection) { construct(uidObtainer); }
void construct(Func<int> uidObtainer) {
// disallow updates of the db from our models
Database.SetInitializer<CoreContext>(null);
// catch the connection change so we can update for our userID
_uidObtainer = uidObtainer;
Database.Connection.StateChange += connectionStateChanged;
}
private void connectionStateChanged(object sender, System.Data.StateChangeEventArgs e) {
// set our context info for logging
if (e.OriginalState == System.Data.ConnectionState.Open ||
e.CurrentState != System.Data.ConnectionState.Open) {
return;
}
int uid = _uidObtainer();
var conn = ((System.Data.Entity.Core.EntityClient.EntityConnection)sender).StoreConnection;
var cmd = conn.CreateCommand();
cmd.CommandText = "audit.SetContext";
cmd.CommandType = System.Data.CommandType.StoredProcedure;
cmd.Parameters.Add(new System.Data.SqlClient.SqlParameter("#DomainUserID", uid));
cmd.ExecuteNonQuery();
}
// etc etc...
In my MVC project, I'll have code that looks like this:
context = new Data.CoreContext(() => AppService.UserID());
(making use of a readily accessible method to pass as delegate, which in turn reads from HttpContext.Current.User)
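(For illustration, a purely hypothetical sketch of what that helper might look like; how the numeric ID is derived from the principal, here parsed from Identity.Name, is an assumption:)
public static class AppService
{
    public static int UserID()
    {
        var identity = System.Web.HttpContext.Current?.User?.Identity;
        if (identity == null || !identity.IsAuthenticated)
            return 0; // anonymous / no user

        int id;
        return int.TryParse(identity.Name, out id) ? id : 0;
    }
}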
This is all shaping up nicely, except one unknown:
I know that it's possible for an EF context instance to span multiple logged-in users, as it lives as part of the IIS app pool and not per HttpContext.
What I don't know is enough about connection pooling and how connections are opened/re-opened to be confident that each time my StateChange handler runs, I'll actually retrieve the new user ID from the delegate.
Said differently: is it possible for a single connection to be open and used over the span of two separate HttpContext instances? I believe yes, seeing as how there's nothing to enforce otherwise (at least not that I'm aware of).
What can I do to ensure that each connection is getting the current HttpContext?
(possibly pertinent notes: There's no UoW/Repository pattern outside of EF itself, and data contexts are generally instantiated once per controller)
I see now that one context per controller is generally incorrect. Instead I should be using one context per request, which (besides other advantages) ensures my scenario works correctly as well.
I found this answer, which explains the reasoning behind it: One DbContext per web request... why?
And I found this answer, which explains quite succinctly how to implement via BeginRequest and EndRequest: One DbContext per request in ASP.NET MVC (without IOC container)
(code from second answer pasted below to prevent linkrot)
protected virtual void Application_BeginRequest()
{
HttpContext.Current.Items["_EntityContext"] = new EntityContext();
}
protected virtual void Application_EndRequest()
{
var entityContext = HttpContext.Current.Items["_EntityContext"] as EntityContext;
if (entityContext != null)
entityContext.Dispose();
}
And in your EntityContext class...
public class EntityContext
{
public static EntityContext Current
{
get { return HttpContext.Current.Items["_EntityContext"] as EntityContext; }
}
}
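A controller action can then just pull the per-request instance (the controller name here is only for illustration, and EntityContext is assumed to be, or to wrap, the actual DbContext):
public class SalesController : Controller // hypothetical controller for illustration
{
    public ActionResult Index()
    {
        // Resolve the per-request context that Application_BeginRequest created
        var context = EntityContext.Current;

        // ... query through the context here ...

        return View();
    }
}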
I have an MVC setup where the controller's Index() method launches a query against a database, which then gets serialized to JSON and returned to the view. Almost every controller is used only to display data, so there are multiple such methods. As an example:
public string getSalesReport()
{
return queryForJsonResult(ConfigurationManager.AppSettings["RetrieveSalesReport"]);
}
is the method that runs. The appSetting I pass to it is the query (which is definitely correct). This method then does the following:
private string queryForJsonResult(string query)
{
SqlConnection connection = new SqlConnection(_connectionString);
connection.Open();
SqlCommand command = new SqlCommand(query, connection);
SqlDataReader result = command.ExecuteReader();
string jsonResult = bufferedResultBuilder(result);
connection.Close();
return jsonResult;
}
private string bufferedResultBuilder(SqlDataReader result)
{
// Takes a result set and uses a buffer to build a JSON string.
StringWriter stringWriter = new StringWriter(new StringBuilder());
JsonTextWriter jsonTextWriter = new JsonTextWriter(stringWriter);
jsonTextWriter.WriteStartArray();
while (result.Read())
{
jsonTextWriter.WriteStartObject();
for (int i = 0; i < result.FieldCount; i++)
{
jsonTextWriter.WritePropertyName(result.GetName(i));
jsonTextWriter.WriteValue(result[i]);
}
jsonTextWriter.WriteEndObject();
}
jsonTextWriter.WriteEndArray();
return stringWriter.ToString();
}
So it takes the result set and builds a JSON string out of it, one entry at a time. When I run the project, all of this seems to work just fine; the data is loaded properly. I also have unit tests that use Stopwatch to measure how long these controller methods take to run. When I run the unit tests, however, this one particular entry throws an error:
Result Message: Test method
Dashboard.Tests.BenchmarkTests.salesOverviewLoadTime threw exception:
System.InvalidOperationException: ExecuteReader: CommandText property
has not been initialized
The query is definitely correct, and there are several other very similar methods that differ only in the query passed in. All of those methods work just fine. The stack trace for this error points back to queryForJsonResult() and SqlCommand.ExecuteReader(), but as I said, it only happens for one of the many very similar methods used to display views.
Why does this one query, which actually loads the data correctly when the site runs, throw an error when I unit test it? The unit tests are set up almost identically except for the controller that is loaded, and all the other ones work just fine.
This is an issue that is still strange to me. As @MichaelLiu pointed out, my .config files did not have the same appSettings. I copied over all of the appSettings that contain my queries and this resolved the issue. It does not necessarily explain, however, why all the other queries worked fine. I think I may have had one line of difference in the way the code handled exceptions, e.g.
try {} catch {return;}
return;
as opposed to
try {
return;
} catch {
return;
}
Though there shouldn't be anything going wrong with return.
In any case, the solution is to make sure that both your project and its unit-test counterpart have the same .config settings, as needed.
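Not part of the original fix, but one way to fail fast with a clearer message (instead of letting ExecuteReader complain about CommandText) is a small guard around the appSettings lookup, for example:
// Requires: using System.Configuration;
private static string GetRequiredAppSetting(string key)
{
    string value = ConfigurationManager.AppSettings[key];
    if (string.IsNullOrWhiteSpace(value))
        throw new ConfigurationErrorsException(
            "Missing appSetting '" + key + "' - check that the test project's .config defines it too.");
    return value;
}

// e.g. return queryForJsonResult(GetRequiredAppSetting("RetrieveSalesReport"));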
I have created a class that, when a user logs into the website, queries the database and stores some settings in a List (so I have key/value pairs).
The reason for this is because I want to always be able to access these settings without going to the database again.
I put these in a class and loop through the fields via a SQL query and add them to the list.
How can I then access these variables from another part of the application? Or is there a better way to do this? I'm talking server side, not really client side.
Here is an example of what I had at the moment:
public static void createSystemMetaData()
{
    string constring = ConfigurationManager.ConnectionStrings["Test"].ConnectionString;
    using (SqlConnection sql = new SqlConnection(constring))
    {
        sql.Open();
        SqlCommand systemMetaData = new SqlCommand("SELECT * FROM SD_TABLES", sql);
        //Set Modules
        var Modules = new List<KeyValuePair<string, string>>();
        using (SqlDataReader systemMetaDataReader = systemMetaData.ExecuteReader())
        {
            while (systemMetaDataReader.Read())
            {
                // Read the column values (GetOrdinal alone would only return the column index)
                var name = systemMetaDataReader["Sequence"].ToString();
                var value = systemMetaDataReader["Property"].ToString();
                Modules.Add(new KeyValuePair<string, string>(name, value));
            }
        }
    }
}
Thanks
Any static properties of a class will be preserved for the lifetime of the application pool, assuming you're using ASP.NET under IIS.
So a very simple class might look like:
public static class MyConfigClass
{
public static Lazy<Something> MyConfig = new Lazy<Something>(() => GetSomethings());
public static Something GetSomethings()
{
// this will only be called once in your web application
}
}
You can then consume this by simply calling
MyConfigClass.MyConfig.Value
For fewer users you can go with session state as Bob suggested; however, with more users you might need to move to a state server or load it from the database each time.
As others have pointed out, the risk of holding these values in global memory is that the values might change. Also, global variables are a bad design decision as you can end up with various parts of your application reading and writing to these values, which makes debugging problems harder than it need be.
A commonly adopted solution is to wrap your database access inside a facade class. This class can then cache the values if you wish to avoid hitting the database for each request. In addition, as changes are routed through the facade too, it knows when the data has changed and can empty its cache (forcing a database re-read) when this occurs. As an added bonus, it becomes possible to mock the facade in order to test code without touching the database (database access is notoriously difficult to unit test).
From the looks of things you are using universal values irrespective of users, so a SqlCacheDependency would be useful here:
Make sure you set up a database dependency in web.config for the name Test.
public static class CacheData {
    public static List<KeyValuePair<string,string>> GetData() {
        var cache = System.Web.HttpContext.Current.Cache;
        SqlCacheDependency SqlDep = null;
        var modules = cache["Modules"] as List<KeyValuePair<string,string>>;
        if (modules == null) {
            // Because of possible exceptions thrown when this
            // code runs, use Try...Catch...Finally syntax.
            try {
                // Instantiate SqlDep using the SqlCacheDependency constructor.
                SqlDep = new SqlCacheDependency("Test", "SD_TABLES");
            }
            // Handle the DatabaseNotEnabledForNotificationException with
            // a call to the SqlCacheDependencyAdmin.EnableNotifications method.
            catch (DatabaseNotEnabledForNotificationException exDBDis) {
                SqlCacheDependencyAdmin.EnableNotifications("Test");
            }
            // Handle the TableNotEnabledForNotificationException with
            // a call to the SqlCacheDependencyAdmin.EnableTableForNotifications method.
            catch (TableNotEnabledForNotificationException exTabDis) {
                SqlCacheDependencyAdmin.EnableTableForNotifications("Test", "SD_TABLES");
            }
            finally {
                // Assign a value to modules here before calling the next line
                cache.Insert("Modules", modules, SqlDep);
            }
        }
        return modules;
    }
}
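Callers then just ask the cache class for the list:
var modules = CacheData.GetData(); // reloaded only when the SQL dependency invalidates the cached copy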
I am building an application with C# and I decided to use the Enterprise Library for the DAL (SQL Server).
I don't remember where, but I read an article about EntLib which said that connections are closed automatically.
Is it true?
If not, what is the best approach to managing connections in the middle layer?
Open and close in each method?
Below is a sample method showing how I am using EntLib:
public DataSet ReturnSomething()
{
    var sqlStr = "select something";
    DbCommand cmd = db.GetSqlStringCommand(sqlStr);
    db.AddInParameter(cmd, "@param1", DbType.Int64, hotelID);
    db.AddInParameter(cmd, "@param2", DbType.String, date);
    return db.ExecuteDataSet(cmd);
}
Thanks in advance.
The ExecuteDataSet method returns a DataSet object that contains all the data, giving you your own local copy. The call to ExecuteDataSet opens a connection, populates the DataSet, and closes the connection before returning the result.
For more info:
http://msdn.microsoft.com/en-us/library/ff648933.aspx
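To illustrate the distinction the documentation draws: ExecuteDataSet manages the connection itself, while reader-based calls keep the connection open until the reader is disposed, so those you wrap in a using block yourself. A rough sketch (the queries are placeholders):
// Connection is opened and closed inside the call:
DataSet ds = db.ExecuteDataSet(CommandType.Text, "SELECT * FROM SomeTable");

// ExecuteReader keeps the connection open while you read, so dispose the reader:
using (IDataReader reader = db.ExecuteReader(CommandType.Text, "SELECT * FROM SomeTable"))
{
    while (reader.Read())
    {
        // consume rows here
    }
}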
I think you should have something like a static class used as a Façade which would provide the correct connection for your library subsystems.
public static class SystemFacade {
// Used as a subsystem to which the connections are provided.
private static readonly SystemFactory _systemFactory = new SystemFactory();
public static IList<Customer> GetCustomers() {
using (var connection = OpenConnection(nameOfEntLibNamedConnection))
return _systemFactory.GetCustomers(connection);
}
public static DbConnection OpenConnection(string connectionName) {
    // Read the EntLib config and create a new connection here (left as a
    // placeholder), and make sure it is opened before you return it.
    DbConnection connection = null; // e.g. create it from the named EntLib connection
if (connection.State == ConnectionState.Closed)
connection.Open();
return connection;
}
}
internal class SystemFactory {
internal IList<Customer> GetCustomers(DbConnection connection) {
// Place code to get customers here.
}
}
And using this code:
public class MyPageClass {
private void DisplayCustomers() {
GridView.DataSource = SystemFacade.GetCustomers();
}
}
In this code sample, you have a static class that provides the functionality and features of a class library. The Façade class gives the user all the possible actions, but you don't want the headache of deciding which connection to use, etc. All you want is the list of customers from the underlying datastore; a call to GetCustomers will do it.
The Façade is an "intelligent" class that knows where to get the information from, so it creates the connection accordingly and orders the customers from the subsystem factory. The factory does what it is asked: it takes the connection it is given and retrieves the customers without asking any further questions.
Does this help?
Yes, EntLib closes connections for you (actually it releases them back into the connection pool). That is the main reason why we originally started to use EntLib.
However, for all new development we have now moved on to Entity Framework; we find it much more productive.