I have an intermittent Devart.Data.Linq.ChangeConflictException: Row not found or changed rearing its ugly head. The funny thing is, the change is still written to the database!
The stack trace says:
Devart.Data.Linq.ChangeConflictException: Row not found or changed.
at Devart.Data.Linq.Engine.b4.a(IObjectEntry[] A_0, ConflictMode A_1, a A_2)
at Devart.Data.Linq.Engine.b4.a(ConflictMode A_0)
at Devart.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at Devart.Data.Linq.DataContext.SubmitChanges()
at Billing.Eway.EwayInternal.SuccessCustomerRenewal(String username, Bill bill, EwayTransaction transaction) in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\EwayInternal.cs:line 552
at Billing.Eway.Eway.BillAllUsers() in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\Eway.cs:line 138
And my code for Billing.Eway.EwayInternal.SuccessCustomerRenewal:
internal static void SuccessCustomerRenewal(string username, Bill bill, EwayTransaction transaction)
{
// Give them their points!
ApplyBillToCustomerAccount(username, bill, true);
BillingEmail.SendRenewalSuccessEmail(username, bill, transaction);
using (MsSqlDataClassesDataContext msSqlDb = new MsSqlDataClassesDataContext())
{
// TODO: Remove this logging
msSqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MsSQL.txt", true) { AutoFlush = true };
EwayCustomer ewayCustomer = msSqlDb.EwayCustomers.First(c => c.Username == username);
ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);
using (MySqlDataContext mySqlDb = new MySqlDataContext())
{
// TODO: Remove this logging
mySqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MySQL.txt", true) { AutoFlush = true };
BillingMySqlContext.Customer grasCustomer = mySqlDb.Customers.First(c => c.Username == username);
// Extend their membership date out so that the plan doesn't expire because of a failed credit card charge.
grasCustomer.MembershipDate =
ewayCustomer.NextBillingDate.AddDays(1);
mySqlDb.SubmitChanges(); // <-- This is line 552
}
msSqlDb.SubmitChanges();
}
}
I know that the issue occurs on the mySqlDb.SubmitChanges() line, since that DB context is the one using Devart (a LINQ solution for MySQL databases); the other context uses plain Microsoft LINQ to SQL.
Not only is the change written to the MySql DB (inner using block), but it is also written to the MsSql DB (outer using block). But that's where the magical success ends.
If I could I would write a Minimal, Complete and Verifiable example, but strangely I'm unable to generate a Devart ChangeConflictException.
So, why does the change get saved to the database after a Devart.Data.Linq.ChangeConflictException? When I previously encountered System.Data.Linq.ChangeConflictException changes weren't saved.
Edit 1:
I've also now included the .PDB file and gotten line number confirmation of the exact source of the exception.
Edit 2:
I now understand why I can't generate a ChangeConflictException, so how is it happening here?
These are the attributes for MembershipDate:
[Column(Name = @"Membership_Date", Storage = "_MembershipDate", CanBeNull = false, DbType = "DATETIME NOT NULL", UpdateCheck = UpdateCheck.Never)]
I know I can explicitly force my changes through to override any potential conflict, but that seems undesirable (I don't know what I would be overriding!). Similarly I could wrap the submit in a try block, and retry (re-reading each time) until success, but that seems clunky. How should I deal with this intermittent issue?
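For reference, the "force it through" approach would look something like the sketch below, assuming Devart's LinqConnect mirrors the LINQ to SQL conflict API (ChangeConflicts / RefreshMode), which I have not verified against this exact version:
try
{
    mySqlDb.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (Devart.Data.Linq.ChangeConflictException)
{
    // Overwrite the database values with the values currently held in memory.
    foreach (var conflict in mySqlDb.ChangeConflicts)
        conflict.Resolve(Devart.Data.Linq.RefreshMode.KeepChanges);
    mySqlDb.SubmitChanges();
}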
Edit 3:
It's not caused by multiple calls. This function is called in one place, by a single-instance app. It creates log entries every time it is run, and they are only getting created once. I have since moved the email call to the top of the method: the email only gets sent once, the exception occurs, and database changes are still made.
I believe it has something to do with the using blocks. Whilst stepping through the debugger on an unrelated issue, I entered the using block, but stopped execution before the SubmitChanges() call. And the changes were still written to the database. My understanding was that using blocks were to ensure resources were cleaned up (connections closed, etc), but it seems that the entire block is being executed. A new avenue to research...
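For what it's worth, my current understanding is that a using block is just syntactic sugar for a try/finally around Dispose, so nothing inside it is undone or skipped; conceptually the compiler turns the outer block into roughly this:
MsSqlDataClassesDataContext msSqlDb = new MsSqlDataClassesDataContext();
try
{
    // ... everything inside the using block runs exactly as written ...
}
finally
{
    if (msSqlDb != null)
        ((IDisposable)msSqlDb).Dispose();   // only disposal is guaranteed, not a rollback
}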
But it still doesn't answer how a ChangeConflictException is even possible given Devart explicitly ignores them.
Edit 4:
So I wasn't going crazy: the database change did get submitted even after I ended execution in the middle of the using block, but it only works for websites.
Edit 5:
As per @Evk's suggestion I've included some DB logging (and updated the stacktrace and code snippet above). The incidence rate of this exception seems to have dropped, as it has only just happened since I implemented the logging. Here are the additional details:
Outer (MS SQL) logfile:
SELECT TOP (1) [t0].[id], [t0].[Username], [t0].[TokenId], [t0].[PlanId], [t0].[SignupDate], [t0].[NextBillingDate], [t0].[PaymentType], [t0].[RetryCount], [t0].[AccountStatus], [t0].[CancelDate]
FROM [dbo].[EwayCustomer] AS [t0]
WHERE [t0].[Username] = @p0
-- @p0: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [dyonis]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.18408a
(It just shows the SELECT call (.First()), none of the updates show).
Inner (MySQL) logfile:
SELECT t1.Customer_ID, t1.Username, t1.Account_Group, t1.Account_Password, t1.First_Name, t1.Last_Name, t1.Account_Type, t1.Points, t1.PromoPoints, t1.Phone, t1.Cell, t1.Email, t1.Address1, t1.Address2, t1.City, t1.State, t1.Country, t1.Postcode, t1.Membership_Group, t1.Suspend_On_Zero_Points, t1.Yahoo_ID, t1.MSN_ID, t1.Skype_ID, t1.Repurchase_Thresh, t1.Active, t1.Delete_Account, t1.Last_Activity, t1.Membership_Expires_After_x_Days, t1.Membership_Date, t1.auth_name, t1.created_by, t1.created_on, t1.AccountGroup_Points_Used, t1.AccountGroup_Points_Threashold, t1.LegacyPoints, t1.Can_Make_Reservation, t1.Gallery_Access, t1.Blog_Access, t1.Private_FTP, t1.Photometrica, t1.Promo_Code, t1.Promo_Expire_DTime, t1.Gift_FirstName, t1.Gift_LastName, t1.Gift_Email, t1.Gift_Phone, t1.Gift_Active, t1.NoMarketingEmail, t1.Can_Schedule, t1.Refered_By, t1.Q1_Hear_About_Us, t1.Q2_Exp_Level, t1.Q3_Intrests, t1.GIS_DTime_UTC, t1.Membership_Expire_Notice_Sent, t1.Promo_Expire_Notice_Sent, t1.isEncrypted, t1.PlanId
FROM grasbill.customers t1
WHERE t1.Username = :p0 LIMIT 1
-- p0: Input VarChar (Size = 6; DbType = AnsiString) [dyonis]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
UPDATE grasbill.customers SET Membership_Date = :p1 WHERE Customer_ID = :key1
-- p1: Input DateTime (Size = 0; DbType = DateTime) [8/3/2016 4:42:53 AM]
-- key1: Input Int (Size = 0; DbType = Int32) [7731]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
(Shows the SELECT and UPDATE calls)
So the log files don't really give any clue as to what's happening, but again the MS SQL database has been updated! The NextBillingDate field has been set correctly, as per this line:
ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);
If it hadn't been updated, the user would have been billed again on the next timer tick (5 mins later), and I can see from logging that didn't happen.
One other interesting thing to note is the log file timestamps. As you can see from the code above I grab the current (UTC) time for the log filename. Here is the information shown by Windows File Explorer:
The MS SQL logfile was created at 04:42 (UTC) and last modified at 14:42 (UTC+10, Windows local-time), but the MySQL logfile was last modified at 15:23 (UTC+10), 41 minutes after it was created. Now I assume the logfile StreamWriter is closed as soon as it leaves scope. Is this delay an expected side effect of the exception? Did it take 41 minutes for the garbage collector to realise I no longer needed a reference to the StreamWriter? Or is something else going on?
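One thought on the 41-minute gap: the StreamWriter assigned to .Log is never explicitly disposed, so the file handle isn't closed until the finalizer eventually runs. A sketch of deterministic disposal (same logging, just owned by a using block so the file is closed when the block exits) would be:
using (var mySqlLog = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MySQL.txt", true) { AutoFlush = true })
using (MySqlDataContext mySqlDb = new MySqlDataContext())
{
    mySqlDb.Log = mySqlLog;
    // ... query and SubmitChanges as before ...
}   // the writer is flushed and closed here, not whenever the GC gets around to it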
Well 6 months later I finally got to the bottom of this problem. Not sure if it will ever help anyone else, but I'll detail it anyway.
There were 2 problems in play here, and 1 of them was idiocy (as they usually are), but one was legitimately something I did not know or expect.
Problem 1
The reason the changes were magically made to the database even though there was an exception was because the very first line of code in that function, ApplyBillToCustomerAccount(username, bill, true), updates the database! <facepalm>
Problem 2
The (Devart) ChangeConflictException isn't only thrown if the data has changed, but also if you're not making any changes. MS SQL stores DateTimes with great precision, but MySQL (or the one I'm running at least) only stores down to seconds. And here's where the intermittency came in. If my database calls were quick enough, or just near the second boundary, they both got rounded to the same time. Devart saw no changes to be written, and threw a ChangeConflictException.
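To illustrate the precision mismatch with made-up values: two DateTimes that differ only by milliseconds become the same value once truncated to whole seconds, which is all the MySQL column stores.
DateTime a = new DateTime(2016, 8, 3, 4, 42, 53, 120);
DateTime b = new DateTime(2016, 8, 3, 4, 42, 53, 740);

DateTime aSeconds = new DateTime(a.Year, a.Month, a.Day, a.Hour, a.Minute, a.Second);
DateTime bSeconds = new DateTime(b.Year, b.Month, b.Day, b.Hour, b.Minute, b.Second);

Console.WriteLine(a == b);               // False: .NET keeps millisecond precision
Console.WriteLine(aSeconds == bSeconds); // True: at second precision they are the same value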
I recently made some optimisations to the database which resulted in far greater responsiveness, and massively increased incidence of this exception. That was one of the clues.
Also, I tried changing the Found Rows parameter to true as instructed in the linked Devart post, but found it did not help in my case. Or perhaps I did it wrong. Either way, now that I've found the source of the issue, I can eliminate the duplicate database updates.
Related
I'm trying to push about 150k updates into a Mongo database (v4.2.9 running on Windows, a staging replica with two nodes) using BulkWrite on the C# driver (v2.11.6), and it looks like it is impossible. The project is .NET Framework 4.7.2.
The Mongo C# driver documentation is terrible, but somehow, on forums and with a lot of googling, I was finally able to find a way to run about 150k updates using a batch, something like this (a little simplified for SO):
client = new MongoClient(connString);
database = client.GetDatabase(db);
// Build all the updates
List<UpdateOneModel<GroupEntry>> updates = new List<UpdateOneModel<GroupEntry>>();
foreach (GroupEntry groupEntry in stats)
{
FilterDefinition<GroupEntry> filter = Builders<GroupEntry>.Filter.Eq(e => e.Key, groupEntry.Key);
UpdateDefinitionBuilder<GroupEntry> update = Builders<GroupEntry>.Update;
var groupEntrySubUpdates = new List<UpdateDefinition<GroupEntry>>();
if (groupEntry.Value.Clicks != 0)
groupEntrySubUpdates.Add(update.Inc(u => u.Value.Clicks, groupEntry.Value.Clicks));
if (groupEntry.Value.Position != 0)
groupEntrySubUpdates.Add(update.Set(u => u.Value.Position, groupEntry.Value.Position));
UpdateOneModel<GroupEntry> groupEntryUpdate = new UpdateOneModel<GroupEntry>(filter, update.Combine(groupEntrySubUpdates));
groupEntryUpdate.IsUpsert = true;
updates.Add(groupEntryUpdate);
}
// Now BulkWrite them in transaction to make sure data are consistent
IClientSessionHandle session = client.StartSession();
session.StartTransaction();
IMongoCollection<GroupEntry> collection = database.GetCollection<GroupEntry>(collectionName);
// Following line FAILS after some time
BulkWriteResult<GroupEntry> bulkWriteResult = collection.BulkWrite(session, updates);
if (!bulkWriteResult.IsAcknowledged)
throw new Exception("Mongo BulkWrite is not acknowledged!");
session.CommitTransaction();
The problem is that I keep getting the following exception:
{
  "operationTime": Timestamp(1612737199, 1),
  "ok": 0.0,
  "errmsg": "Exec error resulting in state FAILURE :: caused by :: operation was interrupted",
  "code": 262,
  "codeName": "ExceededTimeLimit",
  "$clusterTime": {
    "clusterTime": Timestamp(1612737199, 1),
    "signature": {
      "hash": new BinData(0, "ljcwS5Gf2JBpEu/OgPFbvRqclLw="),
      "keyId": NumberLong("6890288652832735234")
    }
  }
}
Does anyone have any clue? The Mongo C# driver docs are completely useless. It looks like I should somehow set the $maxTimeMS property, but that is not possible on BulkWrite. I have tried:
Restarts and rebuilds
Different versions of MongoDriver
Set much bigger timeouts for all "timeout" properties on MongoClient and session
Create smaller batches for BulkWrite (up to 1000 items per batch). Fails after 50-100 updates.
Spent hours and hours in useless Mongo docs and Mongo JIRA
So far no luck. The funny thing is that the same approach works with C# driver 2.10.3 on .NET Core 3.1 (yes, I tried), even with bigger batches (about 300k updates).
What am I missing?
EDIT:
I tried setting maxCommitTime to 25 minutes, based on dododo's comments, like this:
IClientSessionHandle session = client.StartSession(new ClientSessionOptions()
{
DefaultTransactionOptions = new TransactionOptions(new Optional<ReadConcern>(ReadConcern.Default),
new Optional<ReadPreference>(ReadPreference.Primary),
new Optional<WriteConcern>(WriteConcern.Acknowledged),
new Optional<TimeSpan?>(TimeSpan.FromMinutes(25)))
});
It now throws an exception during commit: NoSuchTransaction - Transaction 1 has been aborted. We checked the MongoDB log file and found a new error in there:
Aborting transaction with txnNumber 1 on session
09ea7755-7148-43e8-83d8-8bf58c211bda because it has been running for
longer than 'transactionLifetimeLimitSeconds'
Based on docs, this is 60 seconds by default. So we set it to 5 minutes and now it works.
So, thank you dododo for pointing me the right direction.
Anyway, it would be really great if the Mongo team described errors better and wrote documentation beyond basic CRUD operations.
As dododo suggested, this error was a manifestation of the server closing the transaction because it ran longer than transactionLifetimeLimitSeconds, which is 60 seconds by default. So two things need to be done:
Set the transactionLifetimeLimitSeconds server parameter to more than 60 seconds
Set maxCommitTime to a higher value. I was unable to find its default value, so I set it to 10 minutes (the same as transactionLifetimeLimitSeconds). Set it while starting a session (see the question, and the sketch below).
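A sketch of both settings (values are illustrative, setParameter needs sufficient privileges, and this reuses the client variable from the question; needs using System; using MongoDB.Bson; using MongoDB.Driver;):
// 1) Raise the server-side transaction lifetime via the admin database:
IMongoDatabase admin = client.GetDatabase("admin");
admin.RunCommand<BsonDocument>(new BsonDocument
{
    { "setParameter", 1 },
    { "transactionLifetimeLimitSeconds", 600 }
});

// 2) Give the commit itself more time when starting the session:
IClientSessionHandle session = client.StartSession(new ClientSessionOptions
{
    DefaultTransactionOptions = new TransactionOptions(maxCommitTime: TimeSpan.FromMinutes(10))
});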
Anyway, documentation for this is missing and the error itself was misleading, so I hope this helps anyone who has to deal with it.
I have read the other threads on this, and none of them have answers that resolve my current scenario, nor are they similar. My scenario is reproducible on each run of my application, though I can't seem to produce a smaller piece of code that creates this error.
I'm getting the following error:
An exception has been raised that is likely due to a transient failure. If you are connecting to a SQL Azure database consider using SqlAzureExecutionStrategy.
The inner exception says:
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)
I am not connecting to a SQL Azure database. The connection is to a remote database through VPN, hosted on premises. To give some more context, I'm importing data from an external system, and every time it gets up to a specific record, it always fails when I try to update the entity after creating it. I've tried setting debug logging on in EF and copying the statement it generates into SSMS and running it with the same credentials with no errors. The only differentiating factor between this record and the previous records are the audit fields (time created/modified) and the name, which has changed from 1USD - Holding 99 to 1USD - Holding 100. I actually tested out changing the order which the records get imported, and it always fails at 100 when editing in EF after creation, so there's probably some other underlying issue at hand here. The field itself in the database is handling strings with a higher length than this, including this same process with no errors.
This obviously doesn't seem to actually be a transient failure, nor does it seem to be a connection issue, so how do I find the exact reason why this doesn't work?
Edit: Adding some code below. Also, I've noticed that if I change the name to 1USD - Holding 99 - Test 2, it works without any error despite the name being longer. Automatic ChangeDetection is not enabled for performance reasons.
security = new Security
{
Name = securityName,
IsActive = true,
CreatedAt = DateTime.Now,
CreatedBy = ADMIN_USER,
ModifiedAt = DateTime.Now,
ModifiedBy = ADMIN_USER
};
_repository.Save(security); //Ctx.Set<T>().Add(security); Ctx.SaveChanges();
//some attributes with a foreign key referencing this entity are saved, which is why we update audit fields below, but error occurs regardless if anything additional is saved
security.ModifiedBy = ADMIN_USER;
security.ModifiedAt = DateTime.Now;
_repository.Save(security); //Ctx.Set<T>().Attach(security); Ctx.Entry(security).State = EntityState.Modified; Ctx.SaveChanges();
Edit 2: It definitely seems to be something other than a connection issue, since it happens for anything ending in a 3-character combination, such as A10, B10, or 10A; 1, 2, or 4 characters seem to be fine. I still have no idea what the actual issue is, however.
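For what it's worth, if the failure really were transient, the strategy the error message suggests can be registered in EF6 roughly like the sketch below (class name and retry values are illustrative; this is a mitigation for genuinely transient network errors, not a fix for whatever is wrong with these particular records):
using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 picks this up automatically because it derives from DbConfiguration
// and lives in the same assembly as the DbContext.
public class RetryDbConfiguration : DbConfiguration
{
    public RetryDbConfiguration()
    {
        // Retry transient SQL errors up to 3 times, waiting at most 10 seconds between attempts.
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(3, TimeSpan.FromSeconds(10)));
    }
}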
I am looking at a legacy application that uses SqlXmlCommand objects to get data from the database. There is an .xsd file that describes the tables being used, their fields, their relationships, etc. The issue is that it works most of the time, but not always. I am wondering if there is a way to check what is actually being run on SQL Server. I don't have SQL Profiler installed, so that option is out.
The code looks like:
SqlXmlCommand xcmd = new SqlXmlCommand(DataAccess.OleDbConnectionString);
xcmd.CommandType = SqlXmlCommandType.XPath;
xcmd.SchemaPath = Path.GetFullPath(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"myXsd.xsd"));
xcmd.XslPath = Path.GetFullPath(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, String.Format(@"myXsl.xsl", ReportType)));
xcmd.CommandText = "id[@PK=$PK]";
SqlXmlParameter p = xcmd.CreateParameter();
p.Name = "@PK";
p.Value = Id;
using (Stream s = xcmd.ExecuteStream()) { ... }
This blows up at the ExecuteStream() call with the error:
SQLXML: error loading XML result (XML document must have a top level element.)
We believe that there is some data abnormality that is causing the XML to not generate properly, which is why we want to see exactly what is run.
Cheers
You can try the two queries below. You might need to tweak them a little, but to give you an idea: the first gives you a list of all requests, and the second gives you the detail of a request by its request id (session_id).
SELECT *
FROM sys.dm_exec_requests
DBCC INPUTBUFFER (12345)
Although I would personally rather try to debug the C# app first and view what's being sent over to the server from the VS debugger, before bothering with checking what's being run on SQL Server.
Also, DBCC INPUTBUFFER might give you something like EXECUTE dbo.MyStoredProc 'params...'. To dig deeper, or for a more straightforward query, you can run this:
SELECT r.session_id, r.[status], r.command, t.[text]
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle]) t
I cannot figure out why the HasChanged value of my SqlCacheDependency object comes back from the command execution as false, but almost immediately after it comes back from the database the value changes to true.
Sometimes this happens before the item is even inserted into the cache, causing the cache to discard it immediately; sometimes it's after the insert, and I can grab an enumerator which sees the key in the cache, but before I even loop to that item it has been deleted.
SPROC:
ALTER PROCEDURE [dbo].[ntz_dal_ER_X_Note_SelectAllWER_ID]
@ER_ID int
AS
BEGIN
SELECT
ER_X_Note_ID,
ER_ID,
Note_ID
FROM dbo.ER_X_Note e
WHERE
ER_ID = @ER_ID
END
The database is MS SQL Server 2008, broker service is enabled, and SOME output does cache and remain cached. For instance, this one works just fine:
ALTER PROC [dbo].[ntz_dal_GetCacheControllerByEntityName] (
@Name varchar(50)
) AS
BEGIN
SELECT
CacheController_ID,
EntityName,
CacheEnabled,
Expiration
From dbo.CacheController cc
WHERE EntityName = @Name
END
The code which calls the SPROC in question that fails:
DataSet toReturn;
Hashtable paramHash = new Hashtable();
paramHash.Add("ER_ID", _eR_ID.IsNull ? null : _eR_ID.Value.ToString());
string cacheName = BuildCacheString("ntz_dal_ER_X_Note_SelectAllWER_ID", paramHash);
toReturn = (DataSet)GetFromCache(cacheName);
if (toReturn == null)
{
// Set up parameters (1 input and 0 output)
SqlParameter[] arParms = {
new SqlParameter("#ER_ID", _eR_ID),
};
SqlCacheDependency scd;
// Execute query.
toReturn = _dbTransaction != null
? _dbConnection.ExecuteDataset(_dbTransaction, "dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms)
: _dbConnection.ExecuteDataset("dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms);
AddToCache(cacheName, toReturn, scd);
}
return toReturn;
Code that works
const string sprocName = "ntz_dal_GetCacheControllerByEntityName";
string cacheControlPrefix = "CacheController_" + CachePrefix;
CacheControl controller = (CacheControl)_cache[cacheControlPrefix];
if (controller == null)
{
try
{
SqlParameter[] arParms = {
new SqlParameter("#Name", CachePrefix),
};
SqlCacheDependency sqlCacheDependency;
// Execute query.
DataSet result = _dbTransaction != null
? _dbConnection.ExecuteDataset(_dbTransaction, sprocName, out sqlCacheDependency, arParms)
: _dbConnection.ExecuteDataset(sprocName, out sqlCacheDependency, arParms);
controller = result.Tables[0].Rows.Count == 0
? new CacheControl(false)
: new CacheControl(result.Tables[0].Rows[0]);
_cache.Insert(cacheControlPrefix, controller, sqlCacheDependency);
}
catch (Exception ex)
{
// if sproc retrieval fails, cache the result of false so we don't keep trying
// this is the only case where it can be added with no expiration date
controller = new CacheControl(false);
// direct cache insert, no dependency, no expiration, never try again for this entity
if (HttpContext.Current != null && UseCaching && _cache != null) _cache.Insert(cacheControlPrefix, controller);
}
}
return controller;
The AddToCache method is overloaded and has more tests in it; the direct _cache.Insert in the working method is there to bypass those other tests. The working code helps determine whether DB caching should happen at all.
You can see that when the "non-working" data is retrieved initially, all is OK (HasChanged is false). But somewhere random beyond that point, in this instance just stepping into the next method, HasChanged flips to true.
And yet the data is NOT changing at all; I'm the only one touching this instance of the database.
It was really, really simple, so simple I completely overlooked it.
In this article Creating a Query for Notification, which I DID scour multiple times, it clearly states:
SET Option Settings
When a SELECT statement is executed under a notification request, the
connection that submits the request must have the options for the
connection set as follows:
ANSI_NULLS ON
ANSI_PADDING ON
ANSI_WARNINGS ON
CONCAT_NULL_YIELDS_NULL ON
QUOTED_IDENTIFIER ON
NUMERIC_ROUNDABORT OFF
ARITHABORT ON
Well, I read and re-read and RE-re-read the sproc, and I still didn't see that both ANSI_NULLS and QUOTED_IDENTIFIER were "OFF", not ON.
My dataset is now caching and retaining the data properly without false indicators of change.
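If anyone else wants to confirm what a stored procedure was actually created with before chasing other causes, a quick sketch (connection string and console output are illustrative; OBJECTPROPERTY reports the SET options captured when the proc was created; needs System.Data.SqlClient):
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"SELECT OBJECTPROPERTY(OBJECT_ID(@proc), 'ExecIsAnsiNullsOn')   AS AnsiNullsOn,
             OBJECTPROPERTY(OBJECT_ID(@proc), 'ExecIsQuotedIdentOn') AS QuotedIdentOn", conn))
{
    cmd.Parameters.AddWithValue("@proc", "dbo.ntz_dal_ER_X_Note_SelectAllWER_ID");
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        if (reader.Read())
            Console.WriteLine("ANSI_NULLS on: {0}, QUOTED_IDENTIFIER on: {1}", reader[0], reader[1]);
    }
}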
I have a hunch that the issue is with your _eR_ID. I think that you should try adding a local variable to the failing procedure that uses an impossible value for _eR_ID, such as -1. I never trust what is going to happen when nulls are involved and I think this could be the source of your problem.
Here is the modified version that I recommend trying:
DataSet toReturn;
Hashtable paramHash = new Hashtable();
int local_eR_ID = _eR_ID.IsNull ? -1 : _eR_ID.Value;
paramHash.Add("ER_ID", local_eR_ID.ToString());
string cacheName = BuildCacheString("ntz_dal_ER_X_Note_SelectAllWER_ID", paramHash);
toReturn = (DataSet)GetFromCache(cacheName);
if (toReturn == null)
{
// Set up parameters (1 input and 0 output)
SqlParameter[] arParms = {
new SqlParameter("#ER_ID", local_eR_ID),
};
SqlCacheDependency scd;
// Execute query.
toReturn = _dbTransaction != null
? _dbConnection.ExecuteDataset(_dbTransaction, "dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms)
: _dbConnection.ExecuteDataset("dbo.[ntz_dal_ER_X_Note_SelectAllWER_ID]", out scd, arParms);
AddToCache(cacheName, toReturn, scd);
}
return toReturn;
Important
While creating the above code, I think I discovered the source of your problem: when setting the stored proc parameter, you are using _eR_ID but when you set the paramHash you are using _eR_ID.Value.
The code rewrite will solve this problem, but I suspect that this is the root of the problem.
Running into the same issue and finding the same answers online without any help, I was researching the XML invalid subscription response from Profiler.
I found an example on an MSDN support site that had a slightly different order of code. When I tried it, I realized the problem: don't open your connection object until after you've created the command object and the cache dependency object. Here is the order you must follow, and all will be good:
Be sure to enable notifications (SqlCacheDependencyAdmin) and run SqlDependency.Start first
Create the connection object
Create the command object and assign command text, type, and connection object (any combination of constructors, setting properties, or using CreateCommand).
Create the sql cache dependency object
Open the connection object
Execute the query
Add item to cache using dependency.
If you follow this order, follow all the other requirements on your SELECT statement, and don't have any permissions issues, this will work! A sketch of that ordering follows.
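Here is that ordering in code (a sketch only: variable names, the cache key, erId and connString are illustrative, not taken from the posts above; it assumes notifications are enabled and SqlDependency.Start ran at application startup, and uses System.Data, System.Data.SqlClient, System.Web and System.Web.Caching):
using (var conn = new SqlConnection(connString))                       // 2. create, but don't open yet
using (var cmd = new SqlCommand("dbo.ntz_dal_ER_X_Note_SelectAllWER_ID", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;                      // 3. command fully configured
    cmd.Parameters.AddWithValue("@ER_ID", erId);

    var dependency = new SqlCacheDependency(cmd);                        // 4. dependency before opening

    conn.Open();                                                         // 5. now open the connection
    var table = new DataTable();
    table.Load(cmd.ExecuteReader());                                     // 6. execute the query

    HttpContext.Current.Cache.Insert("ER_X_Note_" + erId, table, dependency); // 7. cache with dependency
}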
I believe the issue has to do with how the .NET Framework manages the connection, specifically which settings are set. I tried overriding this in my SQL command test, but it never worked. This is only a guess; what I do know is that changing the order immediately solved the issue.
I was able to piece it together from the following to msdn posts.
This post covers one of the more common causes of the invalid subscription, and shows how the .NET client sets connection properties that conflict with what notifications require.
https://social.msdn.microsoft.com/Forums/en-US/cf3853f3-0ea1-41b9-987e-9922e5766066/changing-default-set-options-forced-by-net?forum=adodotnetdataproviders
Then this post was from a user who, like me, had reduced his code to the simplest format. My original code pattern was similar to his.
https://social.technet.microsoft.com/Forums/windows/en-US/5a29d49b-8c2c-4fe8-b8de-d632a3f60f68/subscriptions-always-invalid-usual-suspects-checked-no-joy?forum=sqlservicebroker
Then I found this post, also a very simple reduction of the problem, only his was a simple issue: needing the two-part name for tables. In his case that suggestion resolved the issue. After looking at his code I noticed the main difference was waiting to open the connection object until AFTER the command object AND the dependency object were created. My only assumption is that, under the hood (I have not yet started Reflector to check, so this is only an assumption), the connection object is opened differently, or the order of events and commands happens differently, because of this association.
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/bc9ca094-a989-4403-82c6-7f608ed462ce/sql-server-not-creating-subscription-for-simple-select-query-when-using-sqlcachedependency?forum=sqlservicebroker
I hope this helps someone else in a similar issue.
I've investigated the possibilities of creating database backups through SMO with C#.
The task is quite easy and the code straightforward. I've got only one question: how can I check whether the backup was really created?
The SqlBackup.SqlBackup method returns nothing, and I don't even know if it throws any exceptions. (The only thing I know is that it is blocking, because there's also a SqlBackupAsync method.)
I would appreciate any help.
You can do what you asked for; it's very possible.
Doing the backup itself using SMO is not very hard; the hard part is managing the backup and the restore.
All of the code won't fit here, so I will try my best to give you the lines you need.
SqlBackup.SqlBackup doesn't return any value; it's a void method. But it takes one parameter, which is the Server, so try out the following code:
Server srvSql;
//Connect to Server using your authentication method and load the databases in srvSql
// THEN
Backup bkpDatabase = new Backup();
bkpDatabase.Action = BackupActionType.Database;
bkpDatabase.Incremental = true;  // will take an incremental backup
bkpDatabase.Incremental = false; // will take a full backup
bkpDatabase.Database = "your DB name";
BackupDeviceItem bDevice = new BackupDeviceItem("Backup.bak", DeviceType.File);
bkpDatabase.Devices.Add(bDevice);
bkpDatabase.PercentCompleteNotification = 1; // this is for progress reporting
bkpDatabase.SqlBackup(srvSql);
bkpDatabase.Devices.Clear();
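A small aside: PercentCompleteNotification only controls how often progress is reported; to actually see it you still need to attach handlers before calling SqlBackup. A sketch (handler bodies are illustrative, and I'm assuming the completion message is surfaced via e.Error):
bkpDatabase.PercentComplete += (sender, e) =>
    Console.WriteLine("{0}% complete", e.Percent);
bkpDatabase.Complete += (sender, e) =>
    Console.WriteLine(e.Error.Message);   // the final "BACKUP DATABASE successfully processed..." message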
I've investigated the problem using Reflector.NET (I suppose this is legal since Red Gate is a Microsoft Gold Certified Partner and Reflector.NET opens .NET libraries out of the box). As I found out, the method throws two types of exceptions:
FailedOperationException - in most cases; other exceptions are "translated" (I suppose translating means creating a new FailedOperationException and setting its InnerException to what was actually thrown)
UnsupportedVersionException - in one case, when log truncation is set to TruncateOnly and the server major version is greater than or equal to 10 (which is SQL Server 2008?)
This solves my problem partially, because I'm not 100% sure that those exceptions will actually be thrown if something goes wrong.
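Given that, a sketch of how I would combine the two findings: catch FailedOperationException from SqlBackup and, additionally, ask the server to verify the device that was just written via Restore.SqlVerify (which issues RESTORE VERIFYONLY). Names reuse the earlier answer's variables; treat this as a sketch rather than a guaranteed check:
try
{
    bkpDatabase.SqlBackup(srvSql);   // throws FailedOperationException if the backup fails

    Restore verify = new Restore();
    verify.Devices.Add(new BackupDeviceItem("Backup.bak", DeviceType.File));
    bool ok = verify.SqlVerify(srvSql);
    Console.WriteLine(ok ? "Backup written and verified." : "Backup file failed verification.");
}
catch (FailedOperationException ex)
{
    Console.WriteLine("Backup failed: " + (ex.InnerException != null ? ex.InnerException.Message : ex.Message));
}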