I have a quite specific question about a quite specific problem. My dev environment is pretty simple: an Oracle 11g database and a C# ASP.NET web application.
To connect to the database I use the OracleManagedDataAccess NuGet package, and it worked fine until last month. I was able to add columns to my tables without any problem, but now it throws an "index out of range" exception when loading my DataReader into a DataTable.
string connectionString = "myConnectionStringToTheDB";
string requete = "SELECT * FROM MyTable";

using (var connexion = new OracleConnection(connectionString))
{
    connexion.Open();
    using (var commande = new OracleCommand(requete, connexion))
    using (var dr = commande.ExecuteReader())
    using (var datatable = new DataTable())
    {
        datatable.Load(dr); // => crashes here

        foreach (DataRow row in datatable.Rows)
        {
            var campagne = GetFromDataReader<Campagne>(row, connectionString);
            campagne.CodeSource = row["CODE_SOURCE"].ToString();
            result.Add(campagne);
        }
    }
}
This code works while the app is running, no problem. Then I add a column to the table, run the code again, and... nope. I have to stop and restart the app (on IIS) to get it working again.
The thing is, it only crashes when the query uses '*'. If I list all the columns of the table explicitly, it works.
This problem suddenly occurred for no apparent reason, and it is really annoying because I need to add columns to my tables even while users are connected to the app. The newly added columns won't be used until the app restarts, of course...
Do you know how to solve this problem?
Thanks!
After a few tries, I figured out what happened.
The error came from the version of the OracleManagedDataAccess DLL I was using.
Versions after 18.6 automatically include Request and Metadata Pooling.
- I had to roll back from 19.1 to 18.6.
- I added "Metadata Pooling=false;Statement Cache Purge=true;" to my connection string (see the snippet below), and now it works fine again.
So I won't be able to update this DLL any further for now. Well, anyway...
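For reference, the connection string ends up looking roughly like this (the data source and credentials are placeholders; the two keywords at the end are the part that matters):
// Requires Oracle.ManagedDataAccess.Client; data source and credentials here are placeholders.
string connectionString =
    "Data Source=MyTnsAlias;User Id=myUser;Password=myPassword;" +
    "Metadata Pooling=false;Statement Cache Purge=true;";

using (var connexion = new OracleConnection(connectionString))
{
    connexion.Open();
    // ... same SELECT * / DataTable.Load code as in the question ...
}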
I'm working on rewriting the app we use to run upgrades on our database. Basically, the app takes a bunch of scripts (one for each version) and runs every script between the database's current version and the most recent version it knows about. I'm trying to find a better way for us to handle this process, and I've been trying to make SQL Server Management Objects (SMO) work the way we need. For reference, here are the limitations I have to work with.
It has to handle GO statements (which SMO does)
It can't require us to modify the files we have. (This will be used with hundreds, maybe thousands, of files, and we don't want to edit each one manually, so adding try/catch blocks is out of the question.)
It has to continue to the next GO batch if it encounters an error. This is mostly to match the way our current app works: if an error is encountered in one of the script's batches, we want it to continue on to the next one, since they are usually unrelated.
If the script encounters an error, it has to output an error message so the user knows a version's upgrade didn't work and the developers can fix the error for the next version (and here is the problem).
Here's the code I currently have:
string messages = "";

private void button1_Click(object sender, EventArgs e)
{
    string setup = File.ReadAllText(@"[redacted]\Setup.sql");
    string script = File.ReadAllText(@"[redacted]\6.3.6002.0.sql");
    string script2 = File.ReadAllText(@"[redacted]\6.3.6003.0.sql");

    var cnx = new SqlConnection(/*proper connection string*/);
    var server = new Server(new ServerConnection(cnx));

    //server.ConnectionContext.InfoMessage += ConnectionContext_InfoMessage;
    server.ConnectionContext.ServerMessage += ConnectionContext_ServerMessage;

    server.ConnectionContext.ExecuteNonQuery(setup);
    server.ConnectionContext.ExecuteNonQuery(script);
    server.ConnectionContext.ExecuteNonQuery(script2, ExecutionTypes.ContinueOnError);

    txtResult.Text = messages;
}

private void ConnectionContext_ServerMessage(object sender, ServerMessageEventArgs e)
{
    messages += e.Error.Message + "\r\n";
}
And here are the scripts I'm using:
Setup.sql:
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = N'UPGRADE_HISTORY')
    DROP TABLE UPGRADE_HISTORY

IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = N'TEST_CODE_TABLE')
    DROP TABLE TEST_CODE_TABLE

CREATE TABLE UPGRADE_HISTORY (
    UPDATE_DATE DATE NOT NULL,
    VERSION_TXT VARCHAR(50) NOT NULL,
    PRIMARY KEY (UPDATE_DATE, VERSION_TXT)
)

CREATE TABLE TEST_CODE_TABLE (
    CODE_VALUE INT PRIMARY KEY,
    DESCRIPTION_TXT VARCHAR(250) NOT NULL
)

INSERT INTO UPGRADE_HISTORY VALUES
    (DATEADD(d, -3, GETDATE()), '6.2.5000'),
    (DATEADD(d, -1, GETDATE()), '6.2.5001'),
    (DATEADD(d, -1, GETDATE()), '6.2.5002'),
    (DATEADD(d, -1, GETDATE()), '6.3.6000.0'),
    (DATEADD(d, -1, GETDATE()), '6.3.6001.0')

INSERT INTO TEST_CODE_TABLE VALUES
    (1001, 'Test Code Table'),
    (1002, 'Test Code Table 2')
6.3.6002.0.sql:
INSERT INTO UPGRADE_HISTORY VALUES
(GETDATE(), '6.3.6001.0')
GO
PRINT 'Test Code Table Change'
GO
UPDATE TEST_CODE_TABLE SET DESCRIPTION_TXT = 'Test Code Table Change' WHERE CODE_VALUE = 1002;
GO
6.3.6003.0.sql:
INSERT INTO UPGRADE_HISTORY VALUES
(GETDATE(), '6.3.6003.0')
GO
PRINT 'Test Error'
GO
INSERT INTO CODE_TABLE VALUES (1001, 'Test')
--This will throw an error: it conflicts with the primary key of the code table
--(or, as I just noticed, because it doesn't even reference the right table;
--either way it doesn't matter, since I want it to throw an error)
GO
PRINT 'Second Test Code Table Change'
GO
UPDATE TEST_CODE_TABLE SET DESCRIPTION_TXT = 'Test Code Table Change 2' WHERE CODE_VALUE = 1002;
--We still want this to execute.
GO
This reproduces a situation that can happen in our updates. As it is, the setup script just drops and recreates the tables each time so I can reuse the same scripts; the first upgrade file simulates a file that works as intended, and the second upgrade file simulates one that contains an error. And this is where the problems start. As I've got it working at the moment, when the second script is executed the first batch runs, then the second batch runs and errors out, but I don't get an error message: neither the InfoMessage nor the ServerMessage event gets fired. Then the third batch runs (the one after the statement that errors out), and I get a ServerMessage for the PRINT. For reference, here's the output I'm receiving:
Test Code Table Change
Test Error
Second Test Code Table Change
The PRINTs before and after the error come through, and I can confirm from double-checking the data that the UPDATE statement after the error is also executed. However, no message or error is raised for the failing INSERT statement. We really need SMO to throw an error, or trigger the ServerMessage event, or anything at all. Is there something I'm missing, or is this a shortcoming of the framework?
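One avenue I haven't verified yet (so treat it as an assumption, not a finding) is asking the underlying SqlConnection to surface user errors as info messages instead of exceptions, so batches run with ContinueOnError still report what went wrong. Roughly:
// Untested sketch: raise user errors (severity 16 and below) as InfoMessage events
// instead of exceptions, so ContinueOnError doesn't silently swallow them.
var cnx = new SqlConnection(/*proper connection string*/);
cnx.FireInfoMessageEventOnUserErrors = true;
cnx.InfoMessage += (s, args) => messages += args.Message + "\r\n";

var server = new Server(new ServerConnection(cnx));
server.ConnectionContext.ExecuteNonQuery(script2, ExecutionTypes.ContinueOnError);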
I have an intermittent Devart.Data.Linq.ChangeConflictException: Row not found or changed rearing its ugly head. The funny thing is, the change is still written to the database!
The stack trace says:
Devart.Data.Linq.ChangeConflictException: Row not found or changed.
at Devart.Data.Linq.Engine.b4.a(IObjectEntry[] A_0, ConflictMode A_1, a A_2)
at Devart.Data.Linq.Engine.b4.a(ConflictMode A_0)
at Devart.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at Devart.Data.Linq.DataContext.SubmitChanges()
at Billing.Eway.EwayInternal.SuccessCustomerRenewal(String username, Bill bill, EwayTransaction transaction) in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\EwayInternal.cs:line 552
at Billing.Eway.Eway.BillAllUsers() in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\Eway.cs:line 138
And my code for Billing.Eway.EwayInternal.SuccessCustomerRenewal:
internal static void SuccessCustomerRenewal(string username, Bill bill, EwayTransaction transaction)
{
    // Give them their points!
    ApplyBillToCustomerAccount(username, bill, true);
    BillingEmail.SendRenewalSuccessEmail(username, bill, transaction);

    using (MsSqlDataClassesDataContext msSqlDb = new MsSqlDataClassesDataContext())
    {
        // TODO: Remove this logging
        msSqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MsSQL.txt", true) { AutoFlush = true };

        EwayCustomer ewayCustomer = msSqlDb.EwayCustomers.First(c => c.Username == username);
        ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);

        using (MySqlDataContext mySqlDb = new MySqlDataContext())
        {
            // TODO: Remove this logging
            mySqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MySQL.txt", true) { AutoFlush = true };

            BillingMySqlContext.Customer grasCustomer = mySqlDb.Customers.First(c => c.Username == username);

            // Extend their membership date out so that the plan doesn't expire
            // because of a failed credit card charge.
            grasCustomer.MembershipDate = ewayCustomer.NextBillingDate.AddDays(1);

            mySqlDb.SubmitChanges(); // <-- This is line 552
        }
        msSqlDb.SubmitChanges();
    }
}
I know that the issue occurs on the mySqlDb.SubmitChanges() line, since that DB context is the one using Devart (the LINQ provider for MySQL databases); the other context uses plain Microsoft LINQ to SQL.
Not only is the change written to the MySQL DB (the inner using block), it is also written to the MS SQL DB (the outer using block). But that's where the magical success ends.
If I could I would write a Minimal, Complete and Verifiable example, but strangely I'm unable to generate a Devart ChangeConflictException.
So, why does the change get saved to the database after a Devart.Data.Linq.ChangeConflictException? When I previously encountered System.Data.Linq.ChangeConflictException changes weren't saved.
Edit 1:
I've also now included the .PDB file and gotten line number confirmation of the exact source of the exception.
Edit 2:
I now understand why I can't generate a ChangeConflictException, so how is it happening here?
These are the attributes for MembershipDate:
[Column(Name = @"Membership_Date", Storage = "_MembershipDate", CanBeNull = false, DbType = "DATETIME NOT NULL", UpdateCheck = UpdateCheck.Never)]
I know I can explicitly force my changes through to override any potential conflict, but that seems undesirable (I don't know what I would be overriding!). Similarly, I could wrap the submit in a try block and retry (re-reading each time) until it succeeds, but that seems clunky; a rough sketch of what I mean is below. How should I deal with this intermittent issue?
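For concreteness, the retry I'm describing would look roughly like this, assuming Devart's LinqConnect mirrors the System.Data.Linq conflict API (ChangeConflicts / Resolve), which I haven't verified against my version:
try
{
    mySqlDb.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    foreach (ObjectChangeConflict conflict in mySqlDb.ChangeConflicts)
    {
        // Re-read the row from the database, then re-apply only my changes.
        conflict.Resolve(RefreshMode.KeepChanges);
    }
    mySqlDb.SubmitChanges();
}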
Edit 3:
It's not caused by multiple calls. This function is called in one place, by a single-instance app. It creates log entries every time it is run, and they are only getting created once. I have since moved the email call to the top of the method: the email only gets sent once, the exception occurs, and database changes are still made.
I believe it has something to do with the using blocks. Whilst stepping through the debugger on an unrelated issue, I entered the using block, but stopped execution before the SubmitChanges() call. And the changes were still written to the database. My understanding was that using blocks were to ensure resources were cleaned up (connections closed, etc), but it seems that the entire block is being executed. A new avenue to research...
But it still doesn't answer how a ChangeConflictException is even possible given Devart explicitly ignores them.
Edit 4:
So I wasn't going crazy: the database change did get submitted even after I ended execution in the middle of the using block, but it only happens for websites.
Edit 5:
As per @Evk's suggestion I've included some DB logging (and updated the stack trace and code snippet above). The incidence rate of this exception seems to have dropped; it has only just happened again for the first time since I implemented the logging. Here are the additional details:
Outer (MS SQL) logfile:
SELECT TOP (1) [t0].[id], [t0].[Username], [t0].[TokenId], [t0].[PlanId], [t0].[SignupDate], [t0].[NextBillingDate], [t0].[PaymentType], [t0].[RetryCount], [t0].[AccountStatus], [t0].[CancelDate]
FROM [dbo].[EwayCustomer] AS [t0]
WHERE [t0].[Username] = @p0
-- @p0: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [dyonis]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.18408a
(It just shows the SELECT call (.First()), none of the updates show).
Inner (MySQL) logfile:
SELECT t1.Customer_ID, t1.Username, t1.Account_Group, t1.Account_Password, t1.First_Name, t1.Last_Name, t1.Account_Type, t1.Points, t1.PromoPoints, t1.Phone, t1.Cell, t1.Email, t1.Address1, t1.Address2, t1.City, t1.State, t1.Country, t1.Postcode, t1.Membership_Group, t1.Suspend_On_Zero_Points, t1.Yahoo_ID, t1.MSN_ID, t1.Skype_ID, t1.Repurchase_Thresh, t1.Active, t1.Delete_Account, t1.Last_Activity, t1.Membership_Expires_After_x_Days, t1.Membership_Date, t1.auth_name, t1.created_by, t1.created_on, t1.AccountGroup_Points_Used, t1.AccountGroup_Points_Threashold, t1.LegacyPoints, t1.Can_Make_Reservation, t1.Gallery_Access, t1.Blog_Access, t1.Private_FTP, t1.Photometrica, t1.Promo_Code, t1.Promo_Expire_DTime, t1.Gift_FirstName, t1.Gift_LastName, t1.Gift_Email, t1.Gift_Phone, t1.Gift_Active, t1.NoMarketingEmail, t1.Can_Schedule, t1.Refered_By, t1.Q1_Hear_About_Us, t1.Q2_Exp_Level, t1.Q3_Intrests, t1.GIS_DTime_UTC, t1.Membership_Expire_Notice_Sent, t1.Promo_Expire_Notice_Sent, t1.isEncrypted, t1.PlanId
FROM grasbill.customers t1
WHERE t1.Username = :p0 LIMIT 1
-- p0: Input VarChar (Size = 6; DbType = AnsiString) [dyonis]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
UPDATE grasbill.customers SET Membership_Date = :p1 WHERE Customer_ID = :key1
-- p1: Input DateTime (Size = 0; DbType = DateTime) [8/3/2016 4:42:53 AM]
-- key1: Input Int (Size = 0; DbType = Int32) [7731]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
(Shows the SELECT and UPDATE calls)
So the log files don't really give any clue as to what's happening, but again the MS SQL database has been updated! The NextBillingDate field has been set correctly, as per this line:
ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);
If it hadn't been updated, the user would have been billed again on the next timer tick (5 minutes later), and I can see from the logging that that didn't happen.
One other interesting thing to note is the logfile timestamps. As you can see from the code above, I grab the current (UTC) time for the log filename. Here is the information shown by Windows File Explorer:
The MS SQL logfile was created at 04:42 (UTC) and last modified at 14:42 (UTC+10, Windows local-time), but the MySQL logfile was last modified at 15:23 (UTC+10), 41 minutes after it was created. Now I assume the logfile StreamWriter is closed as soon as it leaves scope. Is this delay an expected side effect of the exception? Did it take 41 minutes for the garbage collector to realise I no longer needed a reference to the StreamWriter? Or is something else going on?
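One thing I notice while writing this up: the log StreamWriter in the snippet above is never explicitly disposed, so when the file actually gets closed is left to finalization. Deterministic cleanup would look something like this, reusing the names from my code above (I haven't confirmed this explains the 41 minutes):
// Dispose the log writer deterministically instead of leaving it to the garbage collector.
using (var mySqlLog = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MySQL.txt", true) { AutoFlush = true })
using (MySqlDataContext mySqlDb = new MySqlDataContext())
{
    mySqlDb.Log = mySqlLog;
    // ... same query and SubmitChanges() as above ...
}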
Well 6 months later I finally got to the bottom of this problem. Not sure if it will ever help anyone else, but I'll detail it anyway.
There were 2 problems in play here, and 1 of them was idiocy (as they usually are), but one was legitimately something I did not know or expect.
Problem 1
The reason the changes were magically made to the database even though there was an exception is that the very first line of code in that function, ApplyBillToCustomerAccount(username, bill, true);, updates the database! <facepalm>
Problem 2
The (Devart) ChangeConflictException isn't only thrown if the data has changed, but also if you're not making any changes at all. MS SQL stores DateTimes with high precision, but MySQL (or at least the version I'm running) only stores them down to the second. And here's where the intermittency came in: if my database calls were quick enough, or just fell near a second boundary, both values got rounded to the same time, Devart saw no changes to write, and it threw a ChangeConflictException.
I recently made some optimisations to the database which resulted in far greater responsiveness, and a massively increased incidence of this exception. That was one of the clues.
Also, I tried changing the Found Rows parameter to true as instructed in the linked Devart post, but found it did not help in my case. Or perhaps I did it wrong. Either way, now that I've found the source of the issue I can eliminate the duplicate database updates.
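For anyone hitting the same thing, here's an illustration of the rounding trap using the names from my snippet above; the guard is purely illustrative, not what I ended up shipping:
// MySQL DATETIME (at least on my server) stores whole seconds, so a value that only differs
// in the sub-second part can round to the already-stored value and look like "no change".
DateTime newMembershipDate = ewayCustomer.NextBillingDate.AddDays(1);
DateTime truncated = new DateTime(newMembershipDate.Year, newMembershipDate.Month, newMembershipDate.Day,
                                  newMembershipDate.Hour, newMembershipDate.Minute, newMembershipDate.Second);

if (grasCustomer.MembershipDate != truncated) // only submit when the stored value actually differs
{
    grasCustomer.MembershipDate = truncated;
    mySqlDb.SubmitChanges();
}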
I have an MVC3 view that posts a set of input values back to my controller via Ajax. My controller then creates a new FieldTripRoute object on my context and attempts to insert it into the database.
I just can't figure out what's going on. I've triple-checked my designer schema and my DB schema and they match perfectly, so it can't be the usual issue of a column not existing or being nullable in one place but not the other. However, I keep receiving a "Row not found or changed" exception every time I attempt to submit changes.
The stack trace on the exception looks like this:
at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges()
at ManageMAT.Controllers.FieldTripController.RouteAdd(Int32 id, FormCollection collection)
This is the code that's being called to add the new Route object from the Controller:
[HttpPost]
public ActionResult RouteAdd(int id, FormCollection collection)
{
    FieldTrip trip = context.FieldTrips.Single(ft => ft.ID == id);
    if (trip == null) return Json(new { success = false, message = "Field trip not found." });

    try
    {
        FieldTripRoute tripRoute = new FieldTripRoute();
        tripRoute.FieldTripID = trip.ID;
        tripRoute.Date = DateTime.Parse(collection["Date"]);
        tripRoute.ArrivalTime = DateTime.Parse(collection["ArrivalTime"] + " " + DateTime.Now.ToShortDateString());
        tripRoute.DepartureTime = DateTime.Parse(collection["DepartureTime"] + " " + DateTime.Now.ToShortDateString());
        tripRoute.Destination = collection["Destination"];
        tripRoute.PickupLocation = collection["PickupLocation"];
        tripRoute.RouteID = Convert.ToInt32(collection["RouteID"]);

        context.FieldTripRoutes.InsertOnSubmit(tripRoute);
        context.SubmitChanges();

        return Json(new { success = true, message = "Success!" });
    }
    catch (Exception ex)
    {
        return Json(new { success = false, message = ex.Message });
    }
}
And here is my Designer and DB Table Columns:
I've also attempted to view the SQL this is outputting, both via the logging available on the context object and in SQL Profiler, but it seems to be failing before it even hits the database server.
Edit: Forgot to add one other thing: when I'm initially creating the new FieldTripRoute object at the beginning of the Add action, I noticed that it's not retrieving the correct ID from the database identity sequence. Perhaps this is related?
I've also tried setting Update Check on every field in the designer to Never, just to see if it was some kind of bizarre concurrency collision, but I am still receiving the same error.
I'm really at a loss for what could be causing this issue. Any ideas are appreciated.
This message is thrown every time the row is not inserted, for whatever reason. For DML statements LINQ to SQL checks the number of modified rows; SQL Server returns this count, and it is expected to be one.
The big question is why the count is zero and yet no error message is being sent by SQL Server. Start SQL Server Profiler and post the SQL that L2S generates. Run the SQL manually and see what happens. Does a row get inserted? Does its identity value get returned?
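If Profiler is awkward to run, the same SQL can usually be captured from the DataContext itself; a quick sketch using the context from the question (DataContext.Log accepts any TextWriter):
// Capture the exact commands LINQ to SQL sends for this SubmitChanges call.
var sqlLog = new System.IO.StringWriter();
context.Log = sqlLog;

context.FieldTripRoutes.InsertOnSubmit(tripRoute);
try
{
    context.SubmitChanges();
}
finally
{
    // The generated INSERT text is in the log even if SubmitChanges throws.
    System.Diagnostics.Debug.WriteLine(sqlLog.ToString());
}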
Edit: More debugging ahead: shut down SQL Server just before you do the SubmitChanges to make sure that the database is not being hit. Let's make sure to cut this branch off the search tree.
Next, step into the LINQ to SQL source code to see what's up. If you have R#, this is easy: press Ctrl+Shift+T, search for ChangeProcessor, click, and navigate to the "sources from symbol files". Find the SubmitChanges function and put a breakpoint in there. If you don't have R#, you'll need to dig out a tutorial on the web for this (it's going to take about 5 minutes).
Step through the source to find why the exception is thrown.
I cannot figure out why the HasChanged value of my SqlCacheDependency object initially comes back from the command execution as false, but then, almost immediately after the data comes back from the database, changes to true.
Sometimes this happens before the item is even inserted into the cache, causing the cache to discard it immediately; sometimes it's after the insert, and I can grab an enumerator that sees the key in the cache, but before I even loop to that item it has been deleted.
SPROC:
ALTER PROCEDURE [dbo].[ntz_dal_ER_X_Note_SelectAllWER_ID]
    @ER_ID int
AS
BEGIN
    SELECT
        ER_X_Note_ID,
        ER_ID,
        Note_ID
    FROM dbo.ER_X_Note e
    WHERE
        ER_ID = @ER_ID
END
The database is MS SQL Server 2008, broker service is enabled, and SOME output does cache and remain cached. For instance, this one works just fine:
ALTER PROC [dbo].[ntz_dal_GetCacheControllerByEntityName] (
    @Name varchar(50)
) AS
BEGIN
    SELECT
        CacheController_ID,
        EntityName,
        CacheEnabled,
        Expiration
    FROM dbo.CacheController cc
    WHERE EntityName = @Name
END
The code which calls the failing SPROC:
DataSet toReturn;
Hashtable paramHash = new Hashtable();
paramHash.Add("ER_ID", _eR_ID.IsNull ? null : _eR_ID.Value.ToString());

string cacheName = BuildCacheString("ntz_dal_ER_X_Note_SelectAllWER_ID", paramHash);
toReturn = (DataSet)GetFromCache(cacheName);

if (toReturn == null)
{
    // Set up parameters (1 input and 0 output)
    SqlParameter[] arParms = {
        new SqlParameter("@ER_ID", _eR_ID),
    };

    SqlCacheDependency scd;

    // Execute query.
    toReturn = _dbTransaction != null
        ? _dbConnection.ExecuteDataset(_dbTransaction, "dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms)
        : _dbConnection.ExecuteDataset("dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms);

    AddToCache(cacheName, toReturn, scd);
}

return toReturn;
Code that works
const string sprocName = "ntz_dal_GetCacheControllerByEntityName";
string cacheControlPrefix = "CacheController_" + CachePrefix;

CacheControl controller = (CacheControl)_cache[cacheControlPrefix];
if (controller == null)
{
    try
    {
        SqlParameter[] arParms = {
            new SqlParameter("@Name", CachePrefix),
        };

        SqlCacheDependency sqlCacheDependency;

        // Execute query.
        DataSet result = _dbTransaction != null
            ? _dbConnection.ExecuteDataset(_dbTransaction, sprocName, out sqlCacheDependency, arParms)
            : _dbConnection.ExecuteDataset(sprocName, out sqlCacheDependency, arParms);

        controller = result.Tables[0].Rows.Count == 0
            ? new CacheControl(false)
            : new CacheControl(result.Tables[0].Rows[0]);

        _cache.Insert(cacheControlPrefix, controller, sqlCacheDependency);
    }
    catch (Exception ex)
    {
        // If sproc retrieval fails, cache the result of false so we don't keep trying.
        // This is the only case where it can be added with no expiration date.
        controller = new CacheControl(false);

        // Direct cache insert, no dependency, no expiration, never try again for this entity.
        if (HttpContext.Current != null && UseCaching && _cache != null) _cache.Insert(cacheControlPrefix, controller);
    }
}

return controller;
The AddToCache method is overloaded and has more tests in it; the direct _cache.Insert in the working method bypasses those other tests. The working code helps determine whether DB caching should happen at all.
You can see that when the "non-working" data is retrieved initially, all is OK:
But somewhere random beyond that point (in this instance, just stepping into the next method):
And yet the data is NOT changing at all; I'm the only one touching this instance of the database.
It was really, really simple, so simple I completely overlooked it.
In this article Creating a Query for Notification, which I DID scour multiple times, it clearly states:
SET Option Settings
When a SELECT statement is executed under a notification request, the
connection that submits the request must have the options for the
connection set as follows:
ANSI_NULLS ON
ANSI_PADDING ON
ANSI_WARNINGS ON
CONCAT_NULL_YIELDS_NULL ON
QUOTED_IDENTIFIER ON
NUMERIC_ROUNDABORT OFF
ARITHABORT ON
Well, I read and re-read and RE-re-read the sproc, and I still didn't see that both ANSI_NULLS and QUOTED_IDENTIFIER were "OFF", not ON.
My dataset is now caching and retaining the data properly without false indicators of change.
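If anyone wants to check the same thing from code rather than by re-reading the sproc, the settings a procedure was created with can be queried; a small sketch (the connection string is a placeholder, and the proc name is the one from my question):
// ANSI_NULLS and QUOTED_IDENTIFIER are captured at CREATE/ALTER PROCEDURE time,
// so ask SQL Server what the proc was actually created with.
using (var connection = new SqlConnection("your connection string"))
using (var command = new SqlCommand(
    "SELECT OBJECTPROPERTY(OBJECT_ID(@proc), 'ExecIsAnsiNullsOn')  AS AnsiNullsOn, " +
    "       OBJECTPROPERTY(OBJECT_ID(@proc), 'ExecIsQuotedIdentOn') AS QuotedIdentOn",
    connection))
{
    command.Parameters.AddWithValue("@proc", "dbo.ntz_dal_ER_X_Note_SelectAllWER_ID");
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        if (reader.Read())
        {
            Console.WriteLine("ANSI_NULLS ON:        {0}", reader["AnsiNullsOn"]);
            Console.WriteLine("QUOTED_IDENTIFIER ON: {0}", reader["QuotedIdentOn"]);
        }
    }
}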
I have a hunch that the issue is with your _eR_ID. I think you should try adding a local variable to the failing procedure that uses an impossible value for _eR_ID, such as -1. I never trust what will happen when nulls are involved, and I think this could be the source of your problem.
Here is the modified version that I recommend trying:
DataSet toReturn;
Hashtable paramHash = new Hashtable();

int local_eR_ID = _eR_ID.IsNull ? -1 : _eR_ID.Value;
paramHash.Add("ER_ID", local_eR_ID.ToString());

string cacheName = BuildCacheString("ntz_dal_ER_X_Note_SelectAllWER_ID", paramHash);
toReturn = (DataSet)GetFromCache(cacheName);

if (toReturn == null)
{
    // Set up parameters (1 input and 0 output)
    SqlParameter[] arParms = {
        new SqlParameter("@ER_ID", local_eR_ID),
    };

    SqlCacheDependency scd;

    // Execute query.
    toReturn = _dbTransaction != null
        ? _dbConnection.ExecuteDataset(_dbTransaction, "dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms)
        : _dbConnection.ExecuteDataset("dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms);

    AddToCache(cacheName, toReturn, scd);
}

return toReturn;
Important
While creating the above code, I think I discovered the source of your problem: when setting the stored proc parameter, you are using _eR_ID but when you set the paramHash you are using _eR_ID.Value.
The code rewrite will solve this problem, but I suspect that this is the root of the problem.
Running into the same issue and finding the same answers online without any help, I was researching the "xml invalid subscription" response from Profiler.
I found an example on the MSDN support site that had a slightly different order of code. When I tried it, I realized the problem: don't open your connection object until after you've created the command object and the cache dependency object. Here is the order you must follow and all will be good:
Be sure to enable notifications (SqlCacheDependencyAdmin) and run SqlDependency.Start first
Create the connection object
Create the command object and assign command text, type, and connection object (any combination of constructors, setting properties, or using CreateCommand).
Create the sql cache dependency object
Open the connection object
Execute the query
Add item to cache using dependency.
If you follow this order, follow all the other requirements on your SELECT statement, and don't have any permission issues, this will work! A sketch of the order is below.
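A minimal sketch of that order, borrowing the table and parameter names from the question (treat it as illustrative rather than drop-in code):
// Uses System.Data, System.Data.SqlClient, System.Web, System.Web.Caching.
// Assumes notifications are enabled and SqlDependency.Start(connectionString) was called at startup (step 1).
using (var connection = new SqlConnection(connectionString))            // 2. create the connection, but do NOT open it yet
using (var command = new SqlCommand(
    "SELECT ER_X_Note_ID, ER_ID, Note_ID FROM dbo.ER_X_Note WHERE ER_ID = @ER_ID",
    connection))                                                         // 3. create the command and wire it to the connection
{
    command.Parameters.AddWithValue("@ER_ID", 42);

    var dependency = new SqlCacheDependency(command);                    // 4. create the dependency before opening

    connection.Open();                                                   // 5. only now open the connection

    var table = new DataTable();
    table.Load(command.ExecuteReader());                                 // 6. execute the query

    HttpRuntime.Cache.Insert("ER_X_Note_42", table, dependency);         // 7. add to cache with the dependency
}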
I believe the issue has to do with how the .NET Framework manages the connection, specifically which settings get set. I tried overriding this in my SQL command test but it never worked. This is only a guess; what I do know is that changing the order immediately solved the issue.
I was able to piece it together from the following MSDN posts.
This post covers one of the more common causes of the invalid subscription, and shows how the .NET client sets connection options that conflict with what notifications require.
https://social.msdn.microsoft.com/Forums/en-US/cf3853f3-0ea1-41b9-987e-9922e5766066/changing-default-set-options-forced-by-net?forum=adodotnetdataproviders
Then this post was from a user who, like me, had reduced his code to the simplest format. My original code pattern was similar to his.
https://social.technet.microsoft.com/Forums/windows/en-US/5a29d49b-8c2c-4fe8-b8de-d632a3f60f68/subscriptions-always-invalid-usual-suspects-checked-no-joy?forum=sqlservicebroker
Then I found this post, also a very simple reduction of the problem, only his turned out to be a simple issue: needing two-part names for tables. In his case that suggestion resolved it. After looking at his code, I noticed the main difference was waiting to open the connection object until AFTER the command object AND the dependency object were created. My only assumption is that under the hood (I have not yet fired up Reflector to check, so it's only an assumption) the connection object gets opened differently, or the order of events and commands happens differently, because of this association.
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/bc9ca094-a989-4403-82c6-7f608ed462ce/sql-server-not-creating-subscription-for-simple-select-query-when-using-sqlcachedependency?forum=sqlservicebroker
I hope this helps someone else in a similar issue.