Simple SqlCacheDependency - C#

Almost every tutorial I have read seems to set up SqlCacheDependency incorrectly. I believe they usually mix up the outdated polling method with the query-notification method.
Here are two of many examples:
Web Caching with SqlCacheDependency Simplified (non-microsoft)
SqlCacheDependency Class (Microsoft)
Based on my testing, if you are using Service Broker query notifications (SQL Server 2005+), you don't need to make any .config changes, nor do you need to make any SqlCacheDependencyAdmin calls (no need to register tables, etc.).
I simply do this...
SqlDependency.Start(connString);
...
queryString = "SELECT ...";
cacheName = "SqlCache" + queryString.GetHashCode();
...
using (var connection = new SqlConnection(connString))
{
    connection.Open();

    var cmd = new SqlCommand(queryString, connection)
    {
        Notification = null,
        NotificationAutoEnlist = true
    };

    var dependency = new SqlCacheDependency(cmd);

    SqlDataReader reader = cmd.ExecuteReader();
    try
    {
        while (reader.Read())
        {
            // Set the result you want to cache
            data = ...
        }
    }
    finally
    {
        reader.Close();
    }

    HostingEnvironment.Cache.Insert(cacheName, data, dependency);
}
(The code that checks whether the cache entry is null is not included, as that's all just setup. I just want to show how the cache is set.)
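For completeness, here is a minimal sketch of that cache-check wrapper, assuming the same connString/queryString as above and the usual System.Data, System.Data.SqlClient, System.Web.Caching, and System.Web.Hosting namespaces; GetData is just an illustrative name, not part of my actual code:

    // Hypothetical wrapper around the snippet above: return the cached result if
    // present, otherwise run the query and cache it with a SqlCacheDependency.
    private DataTable GetData(string connString, string queryString)
    {
        string cacheName = "SqlCache" + queryString.GetHashCode();

        // Return the cached copy if it has not been invalidated yet.
        if (HostingEnvironment.Cache[cacheName] is DataTable cached)
            return cached;

        using (var connection = new SqlConnection(connString))
        using (var cmd = new SqlCommand(queryString, connection))
        {
            connection.Open();
            var dependency = new SqlCacheDependency(cmd);

            var data = new DataTable();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                data.Load(reader);
            }

            HostingEnvironment.Cache.Insert(cacheName, data, dependency);
            return data;
        }
    }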
This seems to work without needing to define which tables are involved in the query or to create complicated triggers on each table. It just works.
More surprising to me is that the rules for making a query support notification:
Creating a Query for Notification (I can't find documentation newer than 2008) don't seem to apply. I purposely put a TOP in my SQL and it still works.
For a test, I have it run a query 1000 times involving a table named "Settings". Then I update a value in the table and repeat the query.
I watch the Profiler for any queries involving the word "Settings" and I see the query executed just once (to set the cache), then the update statement, and then the query re-executed one more time (the cache was invalidated and the query ran again).
I am worried that in my 2-3 hours of struggling with the proper way to do this I am missing something. Is it really this simple?
Can I really just put in any query I want and it'll just work? I am looking for any pointers on where I am doing something dangerous/non-standard, or any fine print that I am missing.

var dependency = new SqlCacheDependency(cmd);
When you write a query like this, you automatically define the table name in it, and your connection already has the database name.
It is the non-explicit way of doing the same thing.
The explicit way, which lets you catch exceptions and know what went wrong, is this:
// Declare the SqlCacheDependency instance, SqlDep.
SqlCacheDependency SqlDep = null;

// Check the Cache for the SqlSource key.
// If it isn't there, create it with a dependency
// on a SQL Server table using the SqlCacheDependency class.
if (Cache["SqlSource"] == null) {

    // Because of possible exceptions thrown when this
    // code runs, use Try...Catch...Finally syntax.
    try {
        // Instantiate SqlDep using the SqlCacheDependency constructor.
        SqlDep = new SqlCacheDependency("Northwind", "Categories");
    }
    // Handle the DatabaseNotEnabledForNotificationException with
    // a call to the SqlCacheDependencyAdmin.EnableNotifications method.
    catch (DatabaseNotEnabledForNotificationException exDBDis) {
        try {
            SqlCacheDependencyAdmin.EnableNotifications("Northwind");
        }
        // If the database does not have permissions set for creating tables,
        // the UnauthorizedAccessException is thrown. Handle it by redirecting
        // to an error page.
        catch (UnauthorizedAccessException exPerm) {
            Response.Redirect(".\\ErrorPage.htm");
        }
    }
    // Handle the TableNotEnabledForNotificationException with
    // a call to the SqlCacheDependencyAdmin.EnableTableForNotifications method.
    catch (TableNotEnabledForNotificationException exTabDis) {
        try {
            SqlCacheDependencyAdmin.EnableTableForNotifications("Northwind", "Categories");
        }
        // If a SqlException is thrown, redirect to an error page.
        catch (SqlException exc) {
            Response.Redirect(".\\ErrorPage.htm");
        }
    }
    // If all the other code is successful, add MySource to the Cache
    // with a dependency on SqlDep. If the Categories table changes,
    // MySource will be removed from the Cache. Then generate a message
    // that the data is newly created and added to the cache.
    finally {
        Cache.Insert("SqlSource", Source1, SqlDep);
        CacheMsg.Text = "The data object was created explicitly.";
    }
}
else {
    CacheMsg.Text = "The data was retrieved from the Cache.";
}

As documented in https://learn.microsoft.com/en-us/dotnet/api/system.web.caching.sqlcachedependency?view=netframework-4.8 "Using a SqlCacheDependency object with SQL Server 2005 query notification does not require any explicit configuration."
So, the command has explicit table names in it, and ADO.NET issues the correct Service Broker configuration commands for you. When the table is updated, SQL Server posts a Service Broker message saying the table has been updated. When ADO.NET validates the command, it checks the explicit tables in the broker for updates.
This is why the command associated with the SqlCacheDependency must use explicit table names.
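To illustrate (my own example, not from the documentation quote above), the shape of the query text is what matters: explicit columns and two-part table names are eligible for notification, while things like SELECT * or one-part names typically invalidate the subscription immediately.

    // Hypothetical examples of query text and notification eligibility.
    // Eligible: explicit column list and a two-part (schema-qualified) table name.
    string goodQuery = "SELECT SettingId, SettingValue FROM dbo.Settings";

    // Not eligible: SELECT * and an unqualified table name; the dependency is
    // usually invalidated immediately, so the item never stays cached.
    string badQuery = "SELECT * FROM Settings";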

How to achieve mutual exclusion on a database record in LINQ to SQL

I have a distributed client-server system which has multiple 'worker' engines and a WCF service, let's call it FileManagerService, which adds 'jobs' to a database for the 'worker' engines to pick up and process.
The database consists of two tables: Jobs and File. Every job operates on a File (many-to-one relationship). Multiple jobs can be added to the Jobs table at once; they are processed like a queue.
Examples of Jobs are actions such as MoveFile or DeleteFile, etc.
I am trying to achieve mutual exclusion on the File record by using a 'locked' (bool) column in the File table itself.
The mutual exclusion zones that I think are needed are within the WCF service FileManagerService's addJob() method and within the 'worker' engines' TakeAndProcessAJob().
At the moment I am having trouble working out how LINQ to SQL deferred execution and the LINQ to SQL transaction system work.
I currently have the following method, which is supposed to take a Job from the Jobs table and check if its related File is locked; if it is locked we return nothing, and if it is not locked we lock the File, delete that Job from the Jobs queue, and return the Job:
public static Job GetAnAvailableJob()
{
    using (DBDataContext db = new DBDataContext())
    {
        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Job>(f => f.File);
        db.LoadOptions = loadOptions;

        var jobAvailable = from ja in db.Jobs
                           where ja.File.locked == false
                           select ja;

        var jobToTake = jobAvailable.FirstOrDefault();

        // This fileTemp is here so that we can still return the
        // associated File with the Job after the Job is deleted.
        File fileTemp = null;
        if (jobToTake != null)
        {
            fileTemp = jobToTake.File;
            jobToTake.File.locked = true;
            Console.WriteLine("Locked file:" + fileTemp.FileID);
            db.Jobs.DeleteOnSubmit(jobToTake);
            db.SubmitChanges();
            jobToTake.Asset = fileTemp;
        }
        return jobToTake;
    }
}
The 'worker' engines' TakeAndProcessAJob() basically does the following: call GetAnAvailableJob(); if the returned object is not null, process the job; if it is null, sleep.
The WCF Service FileManagerService uses this locking method:
public static bool LockAsset(long FileID)
{
    using (DBDataContext db = new DBDataContext())
    {
        var fileToLock = db.Files.Where(f => f.FileID == FileID && f.locked == false).Single();
        if (fileToLock == null)
        {
            return false;
        }
        else
        {
            Console.WriteLine("Locked File:" + fileToLock.AssetID);
            fileToLock.locked = true;
            db.SubmitChanges();
            return true;
        }
    }
}
What do I need to change in these two methods to make them behave in a transactional manner and have mutual exclusion on the File record? Currently I am getting change conflict exceptions ("row not changed or found") on the SubmitChanges() within the TakeAndProcessAJob() method.
Thanks
That "row not changed or found" error can be something as simple as your database model file not matching the actual physical database schema.
Another reason might be that LINQ didn't detect any changes to submit, say, if the record you were selecting was already true and you mark it as true again. You could avoid this by not updating if it's already true. Alternatively, in the model file, under the properties for that field on the table, there is an UpdateCheck setting you can turn off, but it doesn't sound like that's what you want.
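If you want to handle that conflict explicitly rather than let it bubble up, a minimal sketch (my own illustration, reusing your DBDataContext) would be to catch ChangeConflictException around SubmitChanges:

    // Hypothetical illustration: catch the LINQ to SQL change conflict and
    // decide how to resolve it instead of letting the exception escape.
    try
    {
        db.SubmitChanges(ConflictMode.ContinueOnConflict);
    }
    catch (ChangeConflictException)
    {
        // Another worker changed (or deleted) the row first; keep the database
        // values and treat the job as not acquired, or retry the whole call.
        db.ChangeConflicts.ResolveAll(RefreshMode.OverwriteCurrentValues);
    }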
In an unrelated story:
doesn't ".Single()" throw an error when nothing is found as aposed to firstOrDefault() which just returns null ?
I could not figure out the answer to my question, so I did some logic restructuring and devised a workaround using plain old SQL. To lock the record I now use a single-line (atomic) command on the record:
UPDATE dataTable SET locked = 'True' WHERE (locked = 'False') AND (recordID= 15)
I then use SqlDataReader and check the RecordsAffected field. If records have changed, it means that I have successfully locked the record. If records have not changed, it means the record is already locked, and so I return false on my locking method.
The process that is trying to lock the record then sits in a while loop calling lockRecord and waiting for a successful callback.
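A minimal sketch of that locking method (my own reconstruction; the table, column, and connection-string names are the placeholders from above, and it uses ExecuteNonQuery rather than SqlDataReader.RecordsAffected) looks like this:

    // Hypothetical reconstruction of the lock method described above: the UPDATE
    // only succeeds if the record is not already locked, so the rows-affected
    // count tells us whether we won the race.
    public static bool TryLockRecord(string connString, long recordID)
    {
        const string sql =
            "UPDATE dataTable SET locked = 'True' " +
            "WHERE locked = 'False' AND recordID = @recordID";

        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@recordID", recordID);
            conn.Open();

            // 1 row affected means we acquired the lock; 0 means it was already locked.
            return cmd.ExecuteNonQuery() == 1;
        }
    }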
I'm not sure if it was completely necessary to make the switch from LINQ to plain SQL. I made the logic changes in my application and the switch to plain SQL at the same time, so it is difficult to identify whether using LINQ was THE problem.

Why does my SqlCacheDependency HasChanged come back false but almost immediately after changes to true?

I cannot figure out why the HasChanged value of my SqlCacheDependency object initially comes back from the command execution as false, but almost immediately after it comes back from the database, the value changes to true.
Sometimes this happens before the item is even inserted into the cache, causing the cache to discard it immediately; sometimes it's after the insert, and I can grab an enumerator which sees the key in the cache, but before I even loop to that item it has been deleted.
SPROC:
ALTER PROCEDURE [dbo].[ntz_dal_ER_X_Note_SelectAllWER_ID]
    @ER_ID int
AS
BEGIN
    SELECT
        ER_X_Note_ID,
        ER_ID,
        Note_ID
    FROM dbo.ER_X_Note e
    WHERE
        ER_ID = @ER_ID
END
The database is MS SQL Server 2008, broker service is enabled, and SOME output does cache and remain cached. For instance, this one works just fine:
ALTER PROC [dbo].[ntz_dal_GetCacheControllerByEntityName] (
    @Name varchar(50)
) AS
BEGIN
    SELECT
        CacheController_ID,
        EntityName,
        CacheEnabled,
        Expiration
    FROM dbo.CacheController cc
    WHERE EntityName = @Name
END
The code which calls the SPROC in question that fails:
DataSet toReturn;
Hashtable paramHash = new Hashtable();
paramHash.Add("ER_ID", _eR_ID.IsNull ? null : _eR_ID.Value.ToString());
string cacheName = BuildCacheString("ntz_dal_ER_X_Note_SelectAllWER_ID", paramHash);
toReturn = (DataSet)GetFromCache(cacheName);
if (toReturn == null)
{
    // Set up parameters (1 input and 0 output)
    SqlParameter[] arParms = {
        new SqlParameter("@ER_ID", _eR_ID),
    };

    SqlCacheDependency scd;

    // Execute query.
    toReturn = _dbTransaction != null
        ? _dbConnection.ExecuteDataset(_dbTransaction, "dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms)
        : _dbConnection.ExecuteDataset("dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms);

    AddToCache(cacheName, toReturn, scd);
}
return toReturn;
Code that works
const string sprocName = "ntz_dal_GetCacheControllerByEntityName";
string cacheControlPrefix = "CacheController_" + CachePrefix;
CacheControl controller = (CacheControl)_cache[cacheControlPrefix];
if (controller == null)
{
    try
    {
        SqlParameter[] arParms = {
            new SqlParameter("@Name", CachePrefix),
        };

        SqlCacheDependency sqlCacheDependency;

        // Execute query.
        DataSet result = _dbTransaction != null
            ? _dbConnection.ExecuteDataset(_dbTransaction, sprocName, out sqlCacheDependency, arParms)
            : _dbConnection.ExecuteDataset(sprocName, out sqlCacheDependency, arParms);

        controller = result.Tables[0].Rows.Count == 0
            ? new CacheControl(false)
            : new CacheControl(result.Tables[0].Rows[0]);

        _cache.Insert(cacheControlPrefix, controller, sqlCacheDependency);
    }
    catch (Exception ex)
    {
        // If sproc retrieval fails, cache the result of false so we don't keep trying.
        // This is the only case where it can be added with no expiration date.
        controller = new CacheControl(false);

        // Direct cache insert, no dependency, no expiration, never try again for this entity.
        if (HttpContext.Current != null && UseCaching && _cache != null) _cache.Insert(cacheControlPrefix, controller);
    }
}
return controller;
The AddToCache method is overloaded and has more tests in it; the direct _cache.Insert in the working method is there to bypass those other tests. The working code helps determine whether DB caching should happen at all.
You can see (in the debugger, screenshots not reproduced here) that when the "non-working" data is retrieved initially, all is OK, but somewhere random beyond that point, in this instance just stepping into the next method, HasChanged flips to true.
And yet the data is NOT changing at all; I'm the only one touching this instance of the database.
It was really, really simple, so simple I completely overlooked it.
In this article Creating a Query for Notification, which I DID scour multiple times, it clearly states:
SET Option Settings
When a SELECT statement is executed under a notification request, the
connection that submits the request must have the options for the
connection set as follows:
ANSI_NULLS ON
ANSI_PADDING ON
ANSI_WARNINGS ON
CONCAT_NULL_YIELDS_NULL ON
QUOTED_IDENTIFIER ON
NUMERIC_ROUNDABORT OFF
ARITHABORT ON
Well, I read and re-read and RE-re-read the sproc, and I still hadn't noticed that both ANSI_NULLS and QUOTED_IDENTIFIER were OFF, not ON.
My dataset is now caching and retaining the data properly without false indicators of change.
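If it helps anyone else, a quick way to check those two settings for a stored procedure (my own sketch; the proc name is the one from this question and connString is assumed) is to read them from sys.sql_modules, which records the options the proc was created with:

    // Hypothetical check: sys.sql_modules stores the ANSI_NULLS and
    // QUOTED_IDENTIFIER settings that were in effect when the proc was created.
    const string checkSql =
        "SELECT uses_ansi_nulls, uses_quoted_identifier " +
        "FROM sys.sql_modules " +
        "WHERE object_id = OBJECT_ID('dbo.ntz_dal_ER_X_Note_SelectAllWER_ID')";

    using (var conn = new SqlConnection(connString))
    using (var cmd = new SqlCommand(checkSql, conn))
    {
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (reader.Read())
            {
                // Both must be true (ON) for query notifications to work.
                Console.WriteLine("ANSI_NULLS: " + reader.GetBoolean(0));
                Console.WriteLine("QUOTED_IDENTIFIER: " + reader.GetBoolean(1));
            }
        }
    }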
I have a hunch that the issue is with your _eR_ID. I think that you should try adding a local variable to the failing procedure that uses an impossible value for _eR_ID, such as -1. I never trust what is going to happen when nulls are involved and I think this could be the source of your problem.
Here is the modified version that I recommend trying:
DataSet toReturn;
Hashtable paramHash = new Hashtable();
int local_eR_ID = _eR_ID.IsNull ? -1 : _eR_ID.Value;
paramHash.Add("ER_ID", local_eR_ID.ToString());
string cacheName = BuildCacheString("ntz_dal_ER_X_Note_SelectAllWER_ID", paramHash);
toReturn = (DataSet)GetFromCache(cacheName);
if (toReturn == null)
{
    // Set up parameters (1 input and 0 output)
    SqlParameter[] arParms = {
        new SqlParameter("@ER_ID", local_eR_ID),
    };

    SqlCacheDependency scd;

    // Execute query.
    toReturn = _dbTransaction != null
        ? _dbConnection.ExecuteDataset(_dbTransaction, "dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms)
        : _dbConnection.ExecuteDataset("dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms);

    AddToCache(cacheName, toReturn, scd);
}
return toReturn;
Important
While creating the above code, I think I discovered the source of your problem: when setting the stored proc parameter, you are using _eR_ID but when you set the paramHash you are using _eR_ID.Value.
The code rewrite will solve this problem, but I suspect that this is the root of the problem.
Running into the same issue and finding the same answers online without any help, I was researching the XML "invalid subscription" response from Profiler.
I found an example on the MSDN support site that had a slightly different order of code. When I tried it, I realized the problem: don't open your connection object until after you've created the command object and the cache dependency object. Here is the order you must follow and all will be good:
Be sure to enable notifications (SqlCacheDependencyAdmin) and run SqlDependency.Start first
Create the connection object
Create the command object and assign command text, type, and connection object (any combination of constructors, setting properties, or using CreateCommand).
Create the sql cache dependency object
Open the connection object
Execute the query
Add item to cache using dependency.
If you follow this order, follow all the other requirements on your SELECT statement, and don't have any permissions issues, this will work! A sketch of this ordering follows below.
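Here is a rough sketch of that ordering (my own illustration; the connection string and query text are placeholders, and it assumes SqlDependency.Start has already been called once at application startup):

    // Hypothetical sketch of the ordering described above.
    string connString = "...";                                                // placeholder
    string queryString = "SELECT SettingId, SettingValue FROM dbo.Settings";  // placeholder

    // 2. Create the connection object (do NOT open it yet).
    using (var connection = new SqlConnection(connString))
    // 3. Create the command object and assign its text and connection.
    using (var cmd = new SqlCommand(queryString, connection))
    {
        // 4. Create the SQL cache dependency object.
        var dependency = new SqlCacheDependency(cmd);

        // 5. Open the connection object.
        connection.Open();

        // 6. Execute the query.
        var data = new DataTable();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            data.Load(reader);
        }

        // 7. Add the item to the cache using the dependency.
        HostingEnvironment.Cache.Insert("SqlCache" + queryString.GetHashCode(), data, dependency);
    }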
I believe the issue has to do with how the .NET Framework manages the connection, specifically what settings are set. I tried overriding this in my SQL command test but it never worked. This is only a guess; what I do know is that changing the order immediately solved the issue.
I was able to piece it together from the following MSDN posts.
This post covers one of the more common causes of the invalid subscription, and shows how the .NET client sets connection properties that conflict with what notification requires.
https://social.msdn.microsoft.com/Forums/en-US/cf3853f3-0ea1-41b9-987e-9922e5766066/changing-default-set-options-forced-by-net?forum=adodotnetdataproviders
Then this post was from a user who, like me, had reduced his code to the simplest format. My original code pattern was similar to his.
https://social.technet.microsoft.com/Forums/windows/en-US/5a29d49b-8c2c-4fe8-b8de-d632a3f60f68/subscriptions-always-invalid-usual-suspects-checked-no-joy?forum=sqlservicebroker
Then I found this post, also a very simple reduction of the problem, only his was a simple issue: needing two-part names for tables. In his case that suggestion resolved the issue. After looking at his code I noticed the main difference was waiting to open the connection object until AFTER the command object AND the dependency object were created. My only assumption is that under the hood (I have not yet fired up Reflector to check, so it is only an assumption) the connection object is opened differently, or the order of events and commands happens differently, because of this association.
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/bc9ca094-a989-4403-82c6-7f608ed462ce/sql-server-not-creating-subscription-for-simple-select-query-when-using-sqlcachedependency?forum=sqlservicebroker
I hope this helps someone else in a similar issue.

IDataReader for Generic Access

I'm trying to make a "Helper" class where you simply pass the driver and the parameters and the class does the connection and connection string assembly for you.
I've been using the interfaces from System.Data like IDataReader, IDbConnection.
Now, after testing it with MySQL, the code works perfectly, but as soon as I point and configure it for SQL Server (Microsoft) it does not return any processed rows. I have done some debugging, and the info from the SQL Server is appearing in the IDataReader, but it seems I can't iterate over it?
My Current Code:
Connect Method in Helper Class
try
{
    factory = System.Data.Common.DbProviderFactories.GetFactory(driver);
    _con = factory.CreateConnection();
    _con.ConnectionString = buildConnectionString.ToString();
    _con.Open();
}
catch (System.Data.Common.DbException)
{
    _con = null;
    throw;   // rethrow, preserving the original stack trace
}
catch (Exception)
{
    _con = null;
    throw;
}
return _con;
At the moment I'm passing my driver as System.Data.SqlClient for SQL Server and MySql.Data.MySqlClient for MySQL.
System.Data.IDataReader reader = cmd.ExecuteReader(System.Data.CommandBehavior.CloseConnection);
while (reader.Read())
{
    System.Data.DataRow row = table.NewRow();
    // Insert info from Reader into the Row
    table.Rows.Add(row);
}
reader.Close();
I suspect it has something to do with how IDataReader is trying to handle the types, but I can't find any documentation on this, as it works perfectly for MySQL but not for SQL Server. Any help?
You aren't giving many clues here, since most of the interesting code here is probably around the command setup. If, at execution, it never enters the while (reader.Read()) {...} block, then it is probably TSQL or parameter related (especially nulls, which can easily result in no rows).
Since your data is DataTable-centric and you already have the provider-factory, another possibility here is to use CreateDataAdapter() from the factory, and let the factory worry about the binding of TSQL to a DataTable. Otherwise, treble-check that the TSQL you are providing is valid, sensible, and correctly parameterised.
Ultimately, the Read() loop itself is fine, and is pretty-much what all materialization routines do. It is, for example, very close to how dapper works, and that works fine over a range of databases.
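For reference, a minimal sketch of the adapter route suggested above (assuming the same provider factory and a parameterised query of your own; the TSQL here is only a placeholder) might look like:

    // Hypothetical sketch: let the provider factory build the adapter and
    // fill a DataTable, instead of looping over IDataReader manually.
    var factory = System.Data.Common.DbProviderFactories.GetFactory(driver);

    using (var connection = factory.CreateConnection())
    using (var command = factory.CreateCommand())
    using (var adapter = factory.CreateDataAdapter())
    {
        connection.ConnectionString = connectionString;  // assumed to be built elsewhere
        command.Connection = connection;
        command.CommandText = "SELECT Id, Name FROM SomeTable WHERE Id = @Id";  // placeholder TSQL

        // Create parameters through the factory too, so the helper stays provider-agnostic
        // (both SqlClient and the MySQL connector accept the @name style).
        var parameter = factory.CreateParameter();
        parameter.ParameterName = "@Id";
        parameter.Value = 42;
        command.Parameters.Add(parameter);

        adapter.SelectCommand = command;

        var table = new System.Data.DataTable();
        adapter.Fill(table);  // Fill opens and closes the connection as needed
    }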

preventing my application from sql injection?

I am making a project in which I have a login page.
I am restricting the user from entering
AND OR NOT XOR & | ^
Is this enough to protect my application from SQL injection?
No, not at all.
For example, I could still enter my username as:
; DELETE FROM Users --
Which could still, depending on your DB structure and application code, wipe your entire Users table.
To adequately protect yourself from SQL Injection attacks you should escape any user input and use either parameterized queries or stored procedures (and if you're using stored procedures, be sure you don't have dynamically generated SQL inside the stored procedure) to interact with the database.
You shouldn't bother looking for special words/characters in the username/password, because you will ALWAYS miss something.
Instead, if you have embedded SQL, you should be using parameterized queries. If you do that for all of your queries then you'll be safe from SQL injection. Now, XSS is a whole other matter... ;)
This has been covered in depth on this site, just search for sql injection.
Using Stored Procedures or parameterized queries will prevent SQL injection.
1) In addition to that, if you are using ASP.NET, you can enable the page-level attribute "ValidateRequest = True", which checks whether any of the input strings could lead to script injection.
2) Make sure you don't display the actual system-generated error to the end user. That gives the hacker a lead to probe further and break the system.
3) If you are using a webservice to consume and sync the data to your database, validate all the necessary fields before persisting the data.
Definitely not!
The simplest possible way to avoid SQL injection is by using parameterized queries.
See this SO question: Preventing SQL Injection in ASP.Net VB.Net and all of its answers to give you an idea.
In short, I never use concatenated string queries, but ALWAYS parameters. This way, there is no danger at all, and this is the most secure way to prevent SQL injection.
Here is a good stack overflow link: What is SQL injection?
Secondly, don't forget that it doesn't matter what validation you do in the UI; people can always construct custom HTTP requests and send them to your server (as trivial as editing with Firebug).
Like others said: parameterized inputs.
Here's a snippet from some code I wrote at work (work-specific code removed). It's not perfect, but my main job is not programming and I was still learning C# when I wrote this. If I wrote it now, I would use a DataReader instead of a DataSet.
But notice how I use parameters in the actual SQL string and assign their values using da.SelectCommand.Parameters.AddWithValue.
public Boolean Login(string strUserName, string strPassword)
{
    SqlConnection sqlConn = new System.Data.SqlClient.SqlConnection();
    DataSet ds = null;
    SqlDataAdapter da = null;
    sqlConn.ConnectionString = strConnString;
    try
    {
        blnError = false;
        sqlConn.Open();
        ds = new DataSet();
        da = new SqlDataAdapter("select iuserid from tbl_Table where vchusername = @vchUserName and vchpassword = @vchPassword", sqlConn);
        da.SelectCommand.Parameters.AddWithValue("@vchUserName", strUserName);
        da.SelectCommand.Parameters.AddWithValue("@vchPassword", strPassword);
        da.SelectCommand.CommandTimeout = 30;
        da.Fill(ds);
        if (ds.Tables[0].Rows.Count > 0)
        {
            iUserId = (int)ds.Tables[0].Rows[0]["iuserid"];
        }
    }
    catch (Exception ex)
    {
        blnError = true;
        Log("Login: " + ex.Message);
    }
    finally
    {
        if (sqlConn.State != ConnectionState.Closed)
            sqlConn.Close();
        if (da != null)
            da.Dispose();
        if (ds != null)
            ds.Dispose();
    }

    if (blnError)
        return false;
    if (iUserId > 0)
        return true;
    return false;
}
You should pass the values as parameters to a stored procedure. This way, whatever the user enters is treated purely as a value rather than appended to the statement and executed.
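A minimal sketch of that approach (my own example; the procedure name, parameter names, and connection string are placeholders) might be:

    // Hypothetical example: call a login stored procedure with parameters so the
    // user's input is only ever treated as a value, never as SQL text.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.usp_ValidateLogin", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@UserName", SqlDbType.VarChar, 50).Value = userName;
        cmd.Parameters.Add("@Password", SqlDbType.VarChar, 50).Value = password;

        conn.Open();
        object result = cmd.ExecuteScalar();   // e.g. the matching user id, or null
        bool isValid = result != null && result != DBNull.Value;
    }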

Checking whether a database is available?

My problem involves checking whether I have a valid database connection before reading from the database. If the database is down, I'd like to write to an XML file instead. I have the location of the database (if it's up) at runtime, so if the database is working I can create a new SqlConnection to it.
Use a typical try...catch...finally structure and, based on the specific exception type and message, decide whether you want to write to XML or not.
SqlConnection connection = null;
try
{
    connection = new SqlConnection(DB("Your DB Name"));
    connection.Open();
}
catch (Exception ex)
{
    // Check the exception message here; if it tells you that the
    // database is not available, then write to the XML file.
    WriteToXml();
}
finally
{
    if (connection != null)
        connection.Close();
}
I would just use something like:
using (SqlConnection conn = new SqlConnection(c))
{
    conn.Open();
}
It will throw an exception if invalid. You could write to the xml in the exception.
An easy way would be to execute a simple query and see if an error occurs:
For Oracle:
SELECT * FROM DUAL
For SQL Server
SELECT 1
Basically, just some kind of relatively "free" query that will let you know that the database is up and running, responding to requests, and your connection hasn't timed out.
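A minimal sketch of that check (my own example; the caller can fall back to writing the XML file when it returns false):

    // Hypothetical helper: returns true if the database answers a trivial query.
    static bool IsDatabaseAvailable(string connectionString)
    {
        try
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT 1", conn))  // use "SELECT * FROM DUAL" on Oracle
            {
                conn.Open();
                cmd.ExecuteScalar();
                return true;
            }
        }
        catch (SqlException)
        {
            return false;
        }
    }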
You cannot really tell whether the DB is up and running without actually opening a connection to it. And even then, the connection might be dropped while you're working with it, so this should be accounted for.
