EF6 doesn't update table when it claims it has - c#

Problem
Entity Framework reads from the database and then falsely logs that it has written back to the database. I first tried using synchronous code and then async code. Any ideas, please?
Background
As a first step in moving my .NET 4.6.1 MVC site away from ADO.NET to EF6, I referenced the EF library used by a sister project in the same system and have tried to read a record, update one field, and save the record back to the database. Reading is ok, but I'm confused by what happens when I perform an update.
For the case in question, the SQL field starts off as null in an existing record, and later gets this single update to set its value.
The logger used below is Log4Net using ADO.NET. It logs to the same database, different table.
The environments are local VS > remote IIS/SQL Server, and also published from VS so everything is remote IIS Server and SQL Server.
Method
using (var db = new EF_Entities())
{
    // Set up the logging.
    db.Database.Log = s => log.Debug(s);

    // Fetch the record from SQL (proven working).
    TheTableObject tto = db.TheTableObject.SingleOrDefault(x => x.Id == 1234567);

    // Update SQL.
    if (tto != null)
    {
        tto.TheFieldSlashProperty = "A short string";
        await db.SaveChangesAsync();
    }
}
Result
The initial data read is correct.
The data write does not happen despite what the log says.
The logged output from db.SaveChangesAsync() is this:
Opened connection asynchronously at 19/03/21 13:09:21 +00:00
Started transaction at 19/03/21 13:09:22 +00:00
UPDATE [dbo].[TheTableObject] SET [TheFieldSlashProperty] = @0 WHERE ([Id] = @1)
-- @0: 'A short string' (Type = String, Size = 50)
-- @1: '1234567' (Type = Int32)
-- Executing asynchronously at 19/03/21 13:09:23 +00:00
-- Completed in 41 ms with result: 1
Committed transaction at 19/03/21 13:09:25 +00:00
Closed connection at 19/03/21 13:09:25 +00:00
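A committed UPDATE that completes "with result: 1" means SQL Server acknowledged one modified row, so when the log and the data disagree it is worth ruling out two things: that EF is pointed at a different database than the one being checked, and that the verification read is being served from a context's cache rather than the server. A diagnostic sketch under those assumptions (the logging calls mirror the question's setup; this is not the asker's code):

```csharp
// Diagnostic sketch: log the connection string EF is actually using, then
// re-read the row on a fresh context with AsNoTracking so the value comes
// from the server rather than the context's identity map.
using (var db = new EF_Entities())
{
    log.Debug("EF is writing to: " + db.Database.Connection.ConnectionString);

    var fresh = db.TheTableObject
                  .AsNoTracking() // bypass the context cache
                  .SingleOrDefault(x => x.Id == 1234567);

    log.Debug("Value after save: " + (fresh == null ? "<row missing>" : fresh.TheFieldSlashProperty));
}
```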

Related

MySql Long Running Query Fails on Docker .NET Core: Attempted to read past the end of the stream / Expected to read 4 header bytes but only received 0

I am attempting to query a MySql database with 95M rows with a query that has a where clause on a non-indexed column (please don't judge, I have no control over that part as the server is not ours).
I've tried both MySqlConnector and MySqlClient with the same result. Consistently, after 5 minutes, they both error:
Using MySqlConnector:
Expected to read 4 header bytes but only received 0.
Using MySql.Data.MySqlClient:
Attempted to read past the end of the stream.
This only happens in a Docker container (running Docker Desktop on Windows with the aspnet:3.1-buster-slim image, but I've tried others with the same result).
Running the same code via an IIS Express-hosted web API or a console app works fine.
The connection string specifies Connect Timeout=21600; Default Command Timeout=21600; MinPoolSize=0; and I've tried various Min/Max pool size configs and turning pooling off with no luck.
I have tried changing the connection string SslMode to None with no change.
The code to query the data is pretty straightforward:
protected virtual async IAsyncEnumerable<List<object>> GetDataAsync(string connectionString, string sql, int timeout = 21600, IsolationLevel isolationLevel = IsolationLevel.ReadCommitted)
{
    await using var conn = new MySqlConnection { ConnectionString = connectionString };
    await conn.OpenAsync();
    using var trans = await conn.BeginTransactionAsync(isolationLevel);
    await using var cmd = new MySqlCommand { Connection = conn, CommandText = sql, CommandTimeout = timeout, Transaction = trans };

    await using (var reader = await cmd.ExecuteReaderAsync())
    {
        while (await reader.ReadAsync())
        {
            var values = new object[reader.FieldCount];
            reader.GetValues(values);
            yield return values.Select(v => v is DBNull ? null : v).ToList();
        }
    }

    await trans.CommitAsync();
}
I have tried with and without the transaction - no change.
If I try a simpler query, I get results back w/o issue using that same GetDataAsync method. Even stranger, other long-running queries are working fine too. If I try to do a similar, non-indexed, query on a table with 30M rows, it runs past the 5 minute mark and eventually (over an hour) returns results.
Running show variables yields the following (none of which seem to point to the issue):
connect_timeout 10
delayed_insert_timeout 300
innodb_flush_log_at_timeout 1
innodb_lock_wait_timeout 50
innodb_rollback_on_timeout OFF
interactive_timeout 28800
lock_wait_timeout 31536000
net_read_timeout 30
net_write_timeout 60
rpl_stop_slave_timeout 31536000
slave_net_timeout 3600
wait_timeout 28800
Is there some kind of idle network timeout occurring in the docker container?
Set Keepalive=120 in your connection string; this will send TCP keepalive packets every two minutes (120 seconds) which should stop the connection from being closed. (You may need to adjust the Keepalive value for your particular situation.)
Note that if you're using MySqlConnector on Linux, due to limitations of .NET Core, this option is only implemented on .NET Core 3.0 (or later).
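As a concrete illustration, the question's connection string could be extended like this (server, database, and credential values are placeholders, not taken from the question):

```csharp
// Sketch: the connection string from the question with Keepalive added.
// Server/database/credential values are placeholders.
var connectionString =
    "Server=myserver;Database=mydb;User ID=appuser;Password=secret;" +
    "Connect Timeout=21600;Default Command Timeout=21600;MinPoolSize=0;" +
    "Keepalive=120"; // send a TCP keepalive packet every 120 seconds
```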

How do I figure out the exact cause of error in EF update?

I have read the other threads on this, and none of them have answers that resolve my current scenario, nor are they similar. My scenario is reproducible on each run of my application, though I can't seem to produce a smaller piece of code that creates this error.
I'm getting the following error:
An exception has been raised that is likely due to a transient failure. If you are connecting to a SQL Azure database consider using SqlAzureExecutionStrategy.
The inner exception says:
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)
I am not connecting to a SQL Azure database. The connection is to a remote database through VPN, hosted on premises. To give some more context, I'm importing data from an external system, and every time it gets up to a specific record, it always fails when I try to update the entity after creating it. I've tried setting debug logging on in EF and copying the statement it generates into SSMS and running it with the same credentials with no errors. The only differentiating factor between this record and the previous records are the audit fields (time created/modified) and the name, which has changed from 1USD - Holding 99 to 1USD - Holding 100. I actually tested out changing the order which the records get imported, and it always fails at 100 when editing in EF after creation, so there's probably some other underlying issue at hand here. The field itself in the database is handling strings with a higher length than this, including this same process with no errors.
This obviously doesn't seem to actually be a transient failure, nor does it seem to be a connection issue, so how do I find the exact reason why this doesn't work?
Edit: Adding some code below. Also, I've noticed that if I change the name to 1USD - Holding 99 - Test 2, it works without any error despite the name being longer. Automatic ChangeDetection is not enabled for performance reasons.
security = new Security
{
    Name = securityName,
    IsActive = true,
    CreatedAt = DateTime.Now,
    CreatedBy = ADMIN_USER,
    ModifiedAt = DateTime.Now,
    ModifiedBy = ADMIN_USER
};
_repository.Save(security); // Ctx.Set<T>().Add(security); Ctx.SaveChanges();

// Some attributes with a foreign key referencing this entity are saved, which is
// why we update the audit fields below, but the error occurs regardless of
// whether anything additional is saved.
security.ModifiedBy = ADMIN_USER;
security.ModifiedAt = DateTime.Now;
_repository.Save(security); // Ctx.Set<T>().Attach(security); Ctx.Entry(security).State = EntityState.Modified; Ctx.SaveChanges();
Edit 2: It definitely seems to be something else other than a connection issue since it's happening for anything ending in a 3 character combination, such as A10, B10, or 10A. 1, 2, or 4 characters seem to be fine. Still have no idea what the actual issue is, however.
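One way to narrow down exactly which statement triggers the transport-level error is an EF6 command interceptor. The class below is a minimal sketch (the class name and Trace logging are illustrative choices, not part of the question's code); it is registered once at application startup with DbInterception.Add:

```csharp
using System;
using System.Data.Common;
using System.Data.Entity.Infrastructure.Interception;
using System.Diagnostics;

// Minimal sketch: log any non-query command (INSERT/UPDATE/DELETE) that
// completes with an exception, so the failing statement can be pinpointed.
public class FailureLoggingInterceptor : DbCommandInterceptor
{
    public override void NonQueryExecuted(DbCommand command,
        DbCommandInterceptionContext<int> interceptionContext)
    {
        if (interceptionContext.Exception != null)
            Trace.WriteLine("Failed: " + command.CommandText
                + Environment.NewLine + interceptionContext.Exception);
    }
}

// At application startup:
// DbInterception.Add(new FailureLoggingInterceptor());
```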

Devart ChangeConflictException but values still written to database

I have an intermittent Devart.Data.Linq.ChangeConflictException: Row not found or changed rearing its ugly head. The funny thing is, the change is still written to the database!
The stack trace says:
Devart.Data.Linq.ChangeConflictException: Row not found or changed.
at Devart.Data.Linq.Engine.b4.a(IObjectEntry[] A_0, ConflictMode A_1, a A_2)
at Devart.Data.Linq.Engine.b4.a(ConflictMode A_0)
at Devart.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at Devart.Data.Linq.DataContext.SubmitChanges()
at Billing.Eway.EwayInternal.SuccessCustomerRenewal(String username, Bill bill, EwayTransaction transaction) in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\EwayInternal.cs:line 552
at Billing.Eway.Eway.BillAllUsers() in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\Eway.cs:line 138
And my code for Billing.Eway.EwayInternal.SuccessCustomerRenewal:
internal static void SuccessCustomerRenewal(string username, Bill bill, EwayTransaction transaction)
{
    // Give them their points!
    ApplyBillToCustomerAccount(username, bill, true);
    BillingEmail.SendRenewalSuccessEmail(username, bill, transaction);

    using (MsSqlDataClassesDataContext msSqlDb = new MsSqlDataClassesDataContext())
    {
        // TODO: Remove this logging
        msSqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MsSQL.txt", true) { AutoFlush = true };
        EwayCustomer ewayCustomer = msSqlDb.EwayCustomers.First(c => c.Username == username);
        ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);

        using (MySqlDataContext mySqlDb = new MySqlDataContext())
        {
            // TODO: Remove this logging
            mySqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MySQL.txt", true) { AutoFlush = true };
            BillingMySqlContext.Customer grasCustomer = mySqlDb.Customers.First(c => c.Username == username);

            // Extend their membership date out so that the plan doesn't expire
            // because of a failed credit card charge.
            grasCustomer.MembershipDate = ewayCustomer.NextBillingDate.AddDays(1);
            mySqlDb.SubmitChanges(); // <-- This is line 552
        }

        msSqlDb.SubmitChanges();
    }
}
I know that the issue occurs on the mySqlDb.SubmitChanges() line, since that DB context is the one using Devart (the LINQ solution for MySQL databases); the other context uses pure MS LINQ.
Not only is the change written to the MySql DB (inner using block), but it is also written to the MsSql DB (outer using block). But that's where the magical success ends.
If I could I would write a Minimal, Complete and Verifiable example, but strangely I'm unable to generate a Devart ChangeConflictException.
So, why does the change get saved to the database after a Devart.Data.Linq.ChangeConflictException? When I previously encountered System.Data.Linq.ChangeConflictException changes weren't saved.
Edit 1:
I've also now included the .PDB file and gotten line number confirmation of the exact source of the exception.
Edit 2:
I now understand why I can't generate a ChangeConflictException, so how is it happening here?
These are the attributes for MembershipDate:
[Column(Name = @"Membership_Date", Storage = "_MembershipDate", CanBeNull = false, DbType = "DATETIME NOT NULL", UpdateCheck = UpdateCheck.Never)]
I know I can explicitly force my changes through to override any potential conflict, but that seems undesirable (I don't know what I would be overriding!). Similarly I could wrap the submit in a try block, and retry (re-reading each time) until success, but that seems clunky. How should I deal with this intermittent issue?
Edit 3:
It's not caused by multiple calls. This function is called in one place, by a single-instance app. It creates log entries every time it is run, and they are only getting created once. I have since moved the email call to the top of the method: the email only gets sent once, the exception occurs, and database changes are still made.
I believe it has something to do with the using blocks. Whilst stepping through the debugger on an unrelated issue, I entered the using block, but stopped execution before the SubmitChanges() call. And the changes were still written to the database. My understanding was that using blocks were to ensure resources were cleaned up (connections closed, etc), but it seems that the entire block is being executed. A new avenue to research...
But it still doesn't answer how a ChangeConflictException is even possible given Devart explicitly ignores them.
Edit 4:
So I wasn't going crazy, the database change did get submitted even after I ended execution in the middle of the using block, but it only works for websites.
Edit 5:
As per @Evk's suggestion I've included some DB logging (and updated the stack trace and code snippet above). The incidence rate of this exception seems to have dropped, as it has only just happened since I implemented the logging. Here are the additional details:
Outer (MS SQL) logfile:
SELECT TOP (1) [t0].[id], [t0].[Username], [t0].[TokenId], [t0].[PlanId], [t0].[SignupDate], [t0].[NextBillingDate], [t0].[PaymentType], [t0].[RetryCount], [t0].[AccountStatus], [t0].[CancelDate]
FROM [dbo].[EwayCustomer] AS [t0]
WHERE [t0].[Username] = @p0
-- @p0: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [dyonis]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.18408a
(It just shows the SELECT call (.First()), none of the updates show).
Inner (MySQL) logfile:
SELECT t1.Customer_ID, t1.Username, t1.Account_Group, t1.Account_Password, t1.First_Name, t1.Last_Name, t1.Account_Type, t1.Points, t1.PromoPoints, t1.Phone, t1.Cell, t1.Email, t1.Address1, t1.Address2, t1.City, t1.State, t1.Country, t1.Postcode, t1.Membership_Group, t1.Suspend_On_Zero_Points, t1.Yahoo_ID, t1.MSN_ID, t1.Skype_ID, t1.Repurchase_Thresh, t1.Active, t1.Delete_Account, t1.Last_Activity, t1.Membership_Expires_After_x_Days, t1.Membership_Date, t1.auth_name, t1.created_by, t1.created_on, t1.AccountGroup_Points_Used, t1.AccountGroup_Points_Threashold, t1.LegacyPoints, t1.Can_Make_Reservation, t1.Gallery_Access, t1.Blog_Access, t1.Private_FTP, t1.Photometrica, t1.Promo_Code, t1.Promo_Expire_DTime, t1.Gift_FirstName, t1.Gift_LastName, t1.Gift_Email, t1.Gift_Phone, t1.Gift_Active, t1.NoMarketingEmail, t1.Can_Schedule, t1.Refered_By, t1.Q1_Hear_About_Us, t1.Q2_Exp_Level, t1.Q3_Intrests, t1.GIS_DTime_UTC, t1.Membership_Expire_Notice_Sent, t1.Promo_Expire_Notice_Sent, t1.isEncrypted, t1.PlanId
FROM grasbill.customers t1
WHERE t1.Username = :p0 LIMIT 1
-- p0: Input VarChar (Size = 6; DbType = AnsiString) [dyonis]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
UPDATE grasbill.customers SET Membership_Date = :p1 WHERE Customer_ID = :key1
-- p1: Input DateTime (Size = 0; DbType = DateTime) [8/3/2016 4:42:53 AM]
-- key1: Input Int (Size = 0; DbType = Int32) [7731]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
(Shows the SELECT and UPDATE calls)
So the log files don't really give any clue as to what's happening, but again the MS SQL database has been updated! The NextBillingDate field has been set correctly, as per this line:
ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);
If it hadn't been updated, the user would have been billed again on the next timer tick (5 mins later), and I can see from logging that didn't happen.
One other interesting thing to note is the log file timestamps. As you can see from the code above I grab the current (UTC) time for the log filename. Here is the information shown by Windows File Explorer:
The MS SQL logfile was created at 04:42 (UTC) and last modified at 14:42 (UTC+10, Windows local-time), but the MySQL logfile was last modified at 15:23 (UTC+10), 41 minutes after it was created. Now I assume the logfile StreamWriter is closed as soon as it leaves scope. Is this delay an expected side effect of the exception? Did it take 41 minutes for the garbage collector to realise I no longer needed a reference to the StreamWriter? Or is something else going on?
Well 6 months later I finally got to the bottom of this problem. Not sure if it will ever help anyone else, but I'll detail it anyway.
There were 2 problems in play here, and 1 of them was idiocy (as they usually are), but one was legitimately something I did not know or expect.
Problem 1
The reason the changes were magically made to the database even though there was an exception was because the very first line of code in that function ApplyBillToCustomerAccount(username, bill, true); updates the database! <facepalm>
Problem 2
The (Devart) ChangeConflictException isn't only thrown if the data has changed, but also if you're not making any changes. MS SQL stores DateTimes with great precision, but MySQL (or the one I'm running at least) only stores down to seconds. And here's where the intermittency came in. If my database calls were quick enough, or just near the second boundary, they both got rounded to the same time. Devart saw no changes to be written, and threw a ChangeConflictException.
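That rounding behaviour can be reproduced without a database. This sketch (values are illustrative, not the asker's data) truncates a .NET DateTime to whole seconds the way a non-fractional MySQL DATETIME column stores it:

```csharp
using System;

// MySQL DATETIME without fractional seconds keeps only whole seconds, so
// two nearby timestamps can round-trip to the same stored value.
DateTime readBack = new DateTime(2016, 8, 3, 4, 42, 53);      // as stored in MySQL
DateTime newValue = new DateTime(2016, 8, 3, 4, 42, 53, 740); // as computed in .NET

// Truncate the new value the way the column will store it:
DateTime truncated = new DateTime(newValue.Year, newValue.Month, newValue.Day,
                                  newValue.Hour, newValue.Minute, newValue.Second);

Console.WriteLine(truncated == readBack); // True: no visible change to update
```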
I recently made some optimisations to the database which resulted in far greater responsiveness, and massively increased incidence of this exception. That was one of the clues.
Also I tried changing the Found Rows parameter to true as instructed in the linked Devart post but found it did not help in my case. Or perhaps I did it wrong. Either way now that I've found the source of the issue I can eliminate the duplicate database updates.

When inserting data using SQLBulkCopy into an Azure SQL Database table I am getting an error message "The wait operation timed out"?

When I insert more than 80000 records into an Azure SQL Database table using the below code:
IEnumerable<SqlBulkCopyColumnMapping> columnMapping;
db.Database.ExecuteSqlCommand("truncate table dbo.Site");
columnMapping = openXmlParse.GetSiteServiceColumnMappings();
bulkCopy.BatchSize = 2000;
bulkCopy.DestinationTableName = "dbo.Site";
bulkCopy.WriteTableToServer(dt, SqlBulkCopyOptions.Default, columnMapping);
db.sp_TrimTableColumns("Site");
In local DB it works fine but an exception is thrown when the code is run against Azure SQL Database.
Explicitly set the bulk copy timeout to a larger value based on how long the insert takes. The .NET default of 30 seconds may not be sufficient for large inserts, and command run time also varies with the service objective chosen.
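With the standard System.Data.SqlClient API, the relevant property is SqlBulkCopy.BulkCopyTimeout (in seconds; 0 disables the limit). A sketch, assuming the question's bulkCopy wraps a standard SqlBulkCopy and that connectionString, columnMapping, and dt come from the surrounding code:

```csharp
// Sketch using the standard SqlBulkCopy API; connectionString, columnMapping
// and dt are assumed to exist as in the question's code.
using (var bulkCopy = new SqlBulkCopy(connectionString))
{
    bulkCopy.BulkCopyTimeout = 600;   // seconds; default is 30, 0 = no limit
    bulkCopy.BatchSize = 2000;
    bulkCopy.DestinationTableName = "dbo.Site";
    foreach (var mapping in columnMapping)
        bulkCopy.ColumnMappings.Add(mapping);
    bulkCopy.WriteToServer(dt);
}
```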

How can I collect the current SQL Server Session ID from an Entity Framework DbContext?

Is there a way to determine the current SQL Server session ID (@@SPID) for an opened DbContext, short of making a SQL query directly to the database?
If there is, is there any guarantee that the SQL Server session ID will remain the same until the DbContext is released and its connection is released back to the Entity Framework connection pool? Something similar to this:
using (MyEntities db = new MyEntities()) {
    // The following 3 pieces of code are not existing properties and will result in compilation errors.
    // I'm just looking for something similar to the following 3 lines.
    db.CurrentSessionId; // error
    db.Database.CurrentSessionId; // error
    ((IObjectContextAdapter)db).ObjectContext.Connection.CurrentSessionId; // error

    // The following code will work, but will this session ID be the same until the original DbContext is disposed?
    // Is there any chance that a db.Database.SqlQuery call will spin off its own connection from the pool?
    short spid = db.Database.SqlQuery<short>("SELECT @@SPID").FirstOrDefault();
}
First of all, the DbContext alone will not open any SQL session on your database; the query does.
So in this case, when you run SELECT @@SPID, you will definitely open a new session with a new ID.
The good news is Entity Framework will use the same connection to run your subsequent queries, so ideally, within the same using block, you will always get the same @@SPID value.
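If you want to remove the doubt entirely, one option (assuming EF6's documented connection handling, not something from the question) is to open the underlying connection yourself: EF6 does not close a connection that the caller opened explicitly, so the context stays pinned to one physical connection, and therefore one SPID, until it is disposed. A sketch:

```csharp
// Sketch: pin the DbContext to a single physical connection so that
// @@SPID remains stable for the context's lifetime.
using (var db = new MyEntities())
{
    db.Database.Connection.Open(); // EF6 leaves explicitly-opened connections open

    short spid = db.Database.SqlQuery<short>("SELECT @@SPID").First();
    // Subsequent queries on this context run on the same session/SPID.
}
```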
You can run this query
select *
from master.dbo.sysprocesses
where program_name = 'EntityFramework'
to observe the current processes on your database associated with Entity Framework.
You can then use the query below to get the SQL statement associated with a specific process. For more information, please take a look at the accepted answer here: List the queries running on SQL Server
declare
      @spid int
    , @stmt_start int
    , @stmt_end int
    , @sql_handle binary(20)

set @spid = XXX -- Fill this in

select top 1
      @sql_handle = sql_handle
    , @stmt_start = case stmt_start when 0 then 0 else stmt_start / 2 end
    , @stmt_end = case stmt_end when -1 then -1 else stmt_end / 2 end
from master.dbo.sysprocesses
where spid = @spid
order by ecid

SELECT
    SUBSTRING( text,
        COALESCE(NULLIF(@stmt_start, 0), 1),
        CASE @stmt_end
            WHEN -1
                THEN DATALENGTH(text)
            ELSE
                (@stmt_end - @stmt_start)
        END
    )
FROM ::fn_get_sql(@sql_handle)