Azure Mobile Server SDK: when is the IQueryable evaluated? - C#

I have a backend on MS Azure built on top of Azure Mobile App Service SDK (namespace Microsoft.Azure.Mobile.Server.Tables and so on).
It is running ASP.NET MVC over a SQL Server database, in C#.
I have scaffolded my Controllers and I have the method GetAllTodoItems that returns an IQueryable<TodoItem>.
When exactly is this IQueryable evaluated?
I have set up a performance load test and the average request takes 46 seconds to complete, while my visible code and the SQL query take at most 5 ms!
What am I missing?
EDIT ====================
Here is my GetAllTodoItems method, together with dependencies:
protected IQueryable<TModelDTO> GetAllEntities()
{
    IQueryable<TModel> allEntitiesQuery = Query();
    IQueryable<string> visibleObj = context.VisibleObjs(GetUserID(), AttType);
    IQueryable<TModel> finalQuery = from item in allEntitiesQuery
                                    join visib in visibleObj on item.Id equals visib
                                    select item;
    return finalQuery.Select(Selector).AsQueryable();
}
IQueryable<string> VisibleObjs(string userID, AttachmentType type)
{
    return (from ud in UserDesktops
            join a in Attachments on ud.DesktopId equals a.ParentDesktop
            where (ud.UserId == userID) && (a.AttachmentType == type)
            select a.Id);
}
protected Func<TModel, TModelDTO> Selector { get { return d => ToDTO(d); } }

protected override TModelDTO ToDTO(TModel input)
{
    return new TModelDTO(input);
}
public TModelDTO(TModel entity)
{
    // all basic properties copied:
    Content = entity.Content;
    Width = entity.Width;
    Color = entity.Color;
    HighResImageContent = entity.HighResImageContent;
    ImageContent = entity.ImageContent;
    MaskPath = entity.MaskPath;
    MinHeight = entity.MinHeight;
    IsComment = entity.IsComment;
    IsInkNote = entity.IsInkNote;
}

In this case it could be executing in several places. Pull the GetUserID() call out and store its result in a variable first:
var userId = GetUserID();
IQueryable<string> visibleObj = context.VisibleObjs(userId, AttType);
That may solve your performance problem right there - perhaps it is executing that SQL separately, then joining in memory.
In addition, context.VisibleObjs - is context the same context used by Query()? If it is a different context, this won't use SQL to join. You should be getting the context in the Initialize method of the controller and storing it in a class variable there.
Also, what type is AttachmentType? Is it an enum? Perhaps it needs an explicit cast to int? More info is needed there.
When exactly is this IQueryable evaluated?
The point at which we want the SQL to run is when the query is iterated. If written correctly, it should not run inside this method at all. When it is iterated, every expression before finalQuery.Select(Selector) should be translated into SQL. The Selector method obviously can't be run on the database, so at that point the SQL has to execute as the query unwinds itself.
The query will unwind itself during serialization.
What does this mean? Well, you've handed back to the API an IQueryable object made up of an expression tree. The Table Service framework may add some filters or sorts as requested by the web client (see the supported query operators). After doing that, the Web API framework (which called into your controller) will enumerate the IQueryable, triggering execution as it writes out the JSON.
We need to know what SQL is actually running. That's key to working with Linq to SQL / EF.
When faced with troubleshooting Linq to SQL I often put a logger on the context's database. context.Database.Log = Console.Write is the quick solution I use. With a TableController, you would want context.Database.Log = a => this.Configuration.Services.GetTraceWriter().Info(a); in the Initialize method of your controller - where the context is initialized.
Then simply take a look at the log.
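As a concrete sketch (MobileServiceContext and TodoItem are placeholders for your own types; the EntityDomainManager wiring is the stock Azure Mobile Apps pattern), the hookup could live in the controller's Initialize override:
using System;
using System.Web.Http.Controllers;
using Microsoft.Azure.Mobile.Server;

public class TodoItemController : TableController<TodoItem>
{
    private MobileServiceContext context;

    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);

        // One EF context per request, stored in a field so Query() and VisibleObjs()
        // share it and the join stays on the SQL side.
        context = new MobileServiceContext();
        DomainManager = new EntityDomainManager<TodoItem>(context, Request);

        // Quick-and-dirty SQL logging; or route it to the trace writer as shown above.
        context.Database.Log = Console.Write;
    }
}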
I mocked up tables, schema, etc. in a TableController and ran this myself with the logging hooked up, then went through the output, so let's take a look at what's happening:
iisexpress.exe Information: 0 : Request, Method=GET, Url=http://localhost:51543/tables/TodoItem?ZUMO-API-VERSION=2.0.0, Message='http://localhost:51543/tables/TodoItem?ZUMO-API-VERSION=2.0.0'
iisexpress.exe Information: 0 : Message='TodoItem', Operation=DefaultHttpControllerSelector.SelectController
iisexpress.exe Information: 0 : Message='maqsService.Controllers.TodoItemController', Operation=DefaultHttpControllerActivator.Create
iisexpress.exe Information: 0 : Message='maqsService.Controllers.TodoItemController', Operation=HttpControllerDescriptor.CreateController
iisexpress.exe Information: 0 : Message='Selected action 'GetAllTodoItems()'', Operation=ApiControllerActionSelector.SelectAction
iisexpress.exe Information: 0 : Operation=HttpActionBinding.ExecuteBindingAsync
iisexpress.exe Information: 0 : Operation=TableQueryFilter.OnActionExecutingAsync
iisexpress.exe Information: 0 : Operation=EnableQueryAttribute.OnActionExecutingAsync
iisexpress.exe Information: 0 : Operation=TableControllerConfigAttribute.OnActionExecutingAsync
'iisexpress.exe' (CLR v4.0.30319: /LM/W3SVC/2/ROOT-1-131799606929250512): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Numerics\v4.0_4.0.0.0__b77a5c561934e089\System.Numerics.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'iisexpress.exe' (CLR v4.0.30319: /LM/W3SVC/2/ROOT-1-131799606929250512): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_32\System.Data.OracleClient\v4.0_4.0.0.0__b77a5c561934e089\System.Data.OracleClient.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
Next is the line in the log indicating that it got back the IQueryable:
iisexpress.exe Information: 0 : Message='Action returned 'System.Linq.Enumerable+WhereSelectEnumerableIterator`2[maqsService.DataObjects.TodoItem,maqsService.Controllers.TodoItemDTO]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
Note that no SQL has been executed yet. The framework now determines that it wants to serialize the result as JSON:
iisexpress.exe Information: 0 : Message='Will use same 'JsonMediaTypeFormatter' formatter', Operation=JsonMediaTypeFormatter.GetPerRequestFormatterInstance
iisexpress.exe Information: 0 : Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
iisexpress.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK)
iisexpress.exe Information: 0 : Operation=TableControllerConfigAttribute.OnActionExecutedAsync, Status=200 (OK)
Now the JsonSerializer is going to serialize the IQueryable, and to do so it needs to enumerate it.
iisexpress.exe Information: 0 : Message='Opened connection at 8/28/2018 4:11:48 PM -04:00
'
'iisexpress.exe' (CLR v4.0.30319: /LM/W3SVC/2/ROOT-1-131799606929250512): Loaded 'EntityFrameworkDynamicProxies-maqsService'.
iisexpress.exe Information: 0 : Message='SELECT
[Extent1].[Id] AS [Id],
[Extent1].[Text] AS [Text],
[Extent1].[Complete] AS [Complete],
[Extent1].[AttachmentId] AS [AttachmentId],
[Extent1].[Version] AS [Version],
[Extent1].[CreatedAt] AS [CreatedAt],
[Extent1].[UpdatedAt] AS [UpdatedAt],
[Extent1].[Deleted] AS [Deleted]
FROM [dbo].[TodoItems] AS [Extent1]
INNER JOIN (SELECT [Extent2].[UserId] AS [UserId], [Extent3].[Id] AS [Id1], [Extent3].[AttachmentType] AS [AttachmentType]
FROM [dbo].[UserDesktops] AS [Extent2]
INNER JOIN [dbo].[Attachments] AS [Extent3] ON [Extent2].[DesktopId] = [Extent3].[ParentDesktop] ) AS [Join1] ON [Extent1].[Id] = [Join1].[Id1]
WHERE (([Join1].[UserId] = @p__linq__0) OR (([Join1].[UserId] IS NULL) AND (@p__linq__0 IS NULL))) AND ([Join1].[AttachmentType] = @p__linq__1)'
iisexpress.exe Information: 0 : Message='
'
iisexpress.exe Information: 0 : Message='-- p__linq__0: 'dana' (Type = String, Size = 4000)
'
iisexpress.exe Information: 0 : Message='-- p__linq__1: '1' (Type = Int32, IsNullable = false)
'
iisexpress.exe Information: 0 : Message='-- Executing at 8/28/2018 4:11:48 PM -04:00
'
iisexpress.exe Information: 0 : Message='-- Completed in 7 ms with result: SqlDataReader
'
iisexpress.exe Information: 0 : Message='
'
iisexpress.exe Information: 0 : Message='Closed connection at 8/28/2018 4:11:48 PM -04:00
'
SQL complete.
iisexpress.exe Information: 0 : Operation=EnableQueryAttribute.OnActionExecutedAsync, Status=200 (OK)
iisexpress.exe Information: 0 : Operation=TableQueryFilter.OnActionExecutedAsync, Status=200 (OK)
iisexpress.exe Information: 0 : Operation=TodoItemController.ExecuteAsync, Status=200 (OK)
iisexpress.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://localhost:51543/tables/TodoItem?ZUMO-API-VERSION=2.0.0, Message='Content-type='application/json; charset=utf-8', content-length=unknown'
iisexpress.exe Information: 0 : Operation=JsonMediaTypeFormatter.WriteToStreamAsync
iisexpress.exe Information: 0 : Operation=TodoItemController.Dispose
I'm not sure what your performance problem is. Even though my table has no data, I can see that it properly rolls everything up into one SQL statement. Something that could be different? The type of AttachmentType - I used an int.
What else?
If you want to own the stack and truly understand what's going on under the sheets, there is also a series of OData framework filters applied further up the call stack.
This is the stack during the call to the TModelDTO (TodoItemDTO in my example) constructor, which is called during iteration over the SQL result. Note that the stack doesn't have our controller method in it - we've long since handed back the IQueryable. This is back in the framework code where it's actually using that IQueryable, which I'm intercepting because our Select method calls into the DTO to transform it.
maqsService.dll!maqsService.Controllers.TodoItemDTO.TodoItemDTO(maqsService.DataObjects.TodoItem entity) Line 89 C#
maqsService.dll!maqsService.Controllers.TodoItemController.ToDTO(maqsService.DataObjects.TodoItem input) Line 55 C#
maqsService.dll!maqsService.Controllers.TodoItemController.get_Selector.AnonymousMethod__6_0(maqsService.DataObjects.TodoItem d) Line 51 C#
> System.Core.dll!System.Linq.Enumerable.WhereSelectEnumerableIterator<maqsService.DataObjects.TodoItem, maqsService.Controllers.TodoItemDTO>.MoveNext() Unknown
System.Core.dll!System.Linq.Buffer<maqsService.Controllers.TodoItemDTO>.Buffer(System.Collections.Generic.IEnumerable<maqsService.Controllers.TodoItemDTO> source) Unknown
System.Core.dll!System.Linq.OrderedEnumerable<maqsService.Controllers.TodoItemDTO>.GetEnumerator() Unknown
System.Core.dll!System.Linq.Enumerable.TakeIterator<maqsService.Controllers.TodoItemDTO>(System.Collections.Generic.IEnumerable<maqsService.Controllers.TodoItemDTO> source, int count) Unknown
mscorlib.dll!System.Collections.Generic.List<maqsService.Controllers.TodoItemDTO>.List(System.Collections.Generic.IEnumerable<maqsService.Controllers.TodoItemDTO> collection) Line 99 C#
It appears that here, OData, while evaluating the $top query parameter for a page, does actually put the results into a list.
System.Web.Http.OData.dll!System.Web.Http.OData.Query.TruncatedCollection<maqsService.Controllers.TodoItemDTO>.TruncatedCollection(System.Linq.IQueryable<maqsService.Controllers.TodoItemDTO> source, int pageSize) Unknown
System.Web.Http.OData.dll!System.Web.Http.OData.Query.ODataQueryOptions.LimitResults<maqsService.Controllers.TodoItemDTO>(System.Linq.IQueryable<maqsService.Controllers.TodoItemDTO> queryable, int limit, out bool resultsLimited) Unknown
I believe the native transition here is SQL related, though I can't find exactly what's going on in the decompiled source.
[Native to Managed Transition]
[Managed to Native Transition]
System.Web.Http.OData.dll!System.Web.Http.OData.Query.ODataQueryOptions.LimitResults(System.Linq.IQueryable queryable, int limit, out bool resultsLimited) Unknown
System.Web.Http.OData.dll!System.Web.Http.OData.Query.ODataQueryOptions.ApplyTo(System.Linq.IQueryable query, System.Web.Http.OData.Query.ODataQuerySettings querySettings) Unknown
System.Web.Http.OData.dll!System.Web.Http.OData.EnableQueryAttribute.ApplyQuery(System.Linq.IQueryable queryable, System.Web.Http.OData.Query.ODataQueryOptions queryOptions) Unknown
System.Web.Http.OData.dll!System.Web.Http.OData.EnableQueryAttribute.ExecuteQuery(object response, System.Net.Http.HttpRequestMessage request, System.Web.Http.Controllers.HttpActionDescriptor actionDescriptor) Unknown
System.Web.Http.OData.dll!System.Web.Http.OData.EnableQueryAttribute.OnActionExecuted(System.Web.Http.Filters.HttpActionExecutedContext actionExecutedContext) Unknown
To dig even deeper, how does the framework know what to do with the IQueryable anyhow? When it iterates it, the IQueryable uses its IQueryProvider to parse the contained Expression (note this is not compiled code, but a tree of method calls and operators which you built up with your joins and where clauses). The provider transforms that tree into SQL (in this case) as best it can. When it hits something it can't translate, it either throws an error or finds a way to work around it.
Gaining a deep understanding of IQueryProvider is a fairly complex computer science task. You could get started here with a walkthrough of creating a Query Provider. Once upon a time I wrote a query provider to transform Linq expressions into Ektron CMS API calls, and you could take a look at that here. I wrote a pretty good summary with links to key areas.
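If you want to poke at this yourself, an IQueryable exposes both of the pieces the framework relies on. A minimal sketch (the context and entity names are hypothetical; requires using System.Linq):
// Nothing below touches the database until the foreach enumerates the query.
IQueryable<TodoItem> query = context.TodoItems.Where(t => !t.Deleted);

Console.WriteLine(query.Expression);          // the composed expression tree
Console.WriteLine(query.Provider.GetType());  // the IQueryProvider that will translate it

foreach (var item in query)                   // enumeration: translation + SQL execution happen here
{
    Console.WriteLine(item.Id);
}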
I hope that helped. I'm not sure what else I could have gone deeper into. Thank you for teaching me something new today - I had no clue what this mobile table API was (and I'm still not clear on the point of it).

LINQ is lazy, which means the query is actually executed only when you access the enumeration, for example in a foreach loop or via extension methods like .ToList() or .ToArray(). When you define the LINQ query in your method, it is simply a "preparation" of what should be done when the result is accessed. Preparing the query takes only a moment compared to executing it, which is why your own code appears to run in a few milliseconds: it is only the preparation. The query is actually executed when the result is finally accessed, e.g. when ASP.NET serializes your data to build the response to a request.
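A tiny illustration of that split between preparation and execution (a sketch; the db and TodoItems names are made up):
var sw = System.Diagnostics.Stopwatch.StartNew();

// "Preparation": composing the query only builds an expression tree.
IQueryable<TodoItem> query = db.TodoItems.Where(t => t.Complete);
Console.WriteLine($"Define:  {sw.ElapsedMilliseconds} ms");   // ~0 ms

// Execution: enumerating (ToList, foreach, or serialization) is what runs the SQL.
List<TodoItem> items = query.ToList();
Console.WriteLine($"Execute: {sw.ElapsedMilliseconds} ms");   // this is where the time goes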
In your case you build a case-insensitive filter
where (ud.UserId.Equals(userID, StringComparison.InvariantCultureIgnoreCase))
in the VisibleObjs() method. The Equals(userID, StringComparison.InvariantCultureIgnoreCase) call seems to force EF to fetch all rows from the table and apply the filter on the client side when the query executes, instead of doing a case-insensitive search on SQL Server. One possible solution is to mark the SQL Server column UserDesktops.UserId with the collation "SQL_Latin1_General_CP1_CI_AS", where CI means case-insensitive. After that you should replace your filter with
where (ud.UserId == userID)
or something similar that avoids .NET methods, so EF can translate your LINQ filter into a plain SQL comparison. The case-insensitive filtering is then done by SQL Server directly, without pulling the full table to the client and filtering there.
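Put together, VisibleObjs might then look like this (a sketch that assumes the collation change above, so the comparison stays case-insensitive on the server):
IQueryable<string> VisibleObjs(string userID, AttachmentType type)
{
    // Plain operators only, so EF can translate the whole filter into the SQL WHERE clause.
    return from ud in UserDesktops
           join a in Attachments on ud.DesktopId equals a.ParentDesktop
           where ud.UserId == userID && a.AttachmentType == type
           select a.Id;
}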

For an action that returns IQueryable data from an ApiController, Web API effectively performs a ToList operation, serializes the resulting list, writes the serialized list into the response body, and returns status code 200 (OK).
Only when the IQueryable is actually enumerated (e.g. via ToList) is the data fetched from the database; that is when the Execute method of the IQueryProvider runs, parsing the expression and then executing it to get the result.
Since you say it takes 46 seconds to complete, I guess you are doing some time-consuming operation with the IQueryable; for example, converting to IEnumerable first and then filtering the data will cause performance problems.
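For example, the difference hinted at above looks like this (a hedged sketch with made-up names):
// Anti-pattern: AsEnumerable() ends the IQueryable chain, so the whole table is
// pulled from SQL and the filter runs in memory.
var slow = db.TodoItems.AsEnumerable()
             .Where(t => t.AttachmentId == attachmentId)
             .ToList();

// Keeping it IQueryable lets EF translate the filter into the SQL WHERE clause.
var fast = db.TodoItems
             .Where(t => t.AttachmentId == attachmentId)
             .ToList();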
You can provide us with more detailed code for further research.
Hope this was helpful.

Related

PostgreSQL Log - parameter details [duplicate]

Is there a way to dump the generated SQL to the Debug log or something? I'm using it in a WinForms solution, so the mini-profiler idea won't work for me.
I had the same issue and, after searching and finding nothing ready to use, implemented some code myself. There is a package on NuGet, MiniProfiler.Integrations, that I would like to share.
Update V2: it now supports other database servers; for MySQL it requires MiniProfiler.Integrations.MySql.
Below are steps to work with SQL Server:
1. Instantiate the connection
var factory = new SqlServerDbConnectionFactory(_connectionString);
using (var connection = ProfiledDbConnectionFactory.New(factory, CustomDbProfiler.Current))
{
    // your code
}
2. After all the work is done, write all the captured commands to a file if you want:
File.WriteAllText("SqlScripts.txt", CustomDbProfiler.Current.ProfilerContext.BuildCommands());
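Putting the two steps together with a Dapper call, a usage sketch might look like this (the query and table name are made up; the factory and profiler types are the ones from the package above):
// requires: using Dapper;
var factory = new SqlServerDbConnectionFactory(_connectionString);
using (var connection = ProfiledDbConnectionFactory.New(factory, CustomDbProfiler.Current))
{
    // Any command issued through the profiled connection is captured.
    var ids = connection.Query<int>(
        "SELECT Id FROM Orders WHERE Status = @status",
        new { status = 1 });
}

// Dump everything that was executed, including parameters.
File.WriteAllText("SqlScripts.txt", CustomDbProfiler.Current.ProfilerContext.BuildCommands());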
Dapper does not currently have an instrumentation point here. This is perhaps due, as you note, to the fact that we (as the authors) use mini-profiler to handle this. However, if it helps, the core parts of mini-profiler are actually designed to be architecture neutral, and I know of other people using it with winforms, wpf, wcf, etc - which would give you access to the profiling / tracing connection wrapper.
In theory, it would be perfectly possible to add some blanket capture-point, but I'm concerned about two things:
(primarily) security: since dapper doesn't have a concept of a context, it would be really really easy for malign code to attach quietly to sniff all sql traffic that goes via dapper; I really don't like the sound of that (this isn't an issue with the "decorator" approach, as the caller owns the connection, hence the logging context)
(secondary) performance: but... in truth, it is hard to say that a simple delegate-check (which would presumably be null in most cases) would have much impact
Of course, the other thing you could do is: steal the connection wrapper code from mini-profiler, and replace the profiler-context stuff with just: Debug.WriteLine etc.
You should consider using SQL Server Profiler, located in SQL Server Management Studio under Tools → SQL Server Profiler (no Dapper extensions needed; this may also work with other RDBMSs if they have a SQL profiler tool).
Then, start a new session.
You'll get something like this for example (you see all parameters and the complete SQL string):
exec sp_executesql N'SELECT * FROM Updates WHERE CAST(Product_ID as VARCHAR(50)) = @appId AND (Blocked IS NULL OR Blocked = 0)
AND (Beta IS NULL OR Beta = 0 OR @includeBeta = 1) AND (LangCode IS NULL OR LangCode IN (SELECT * FROM STRING_SPLIT(@langCode, '','')))',N'@appId nvarchar(4000),@includeBeta bit,@langCode nvarchar(4000)',@appId=N'fea5b0a7-1da6-4394-b8c8-05e7cb979161',@includeBeta=0,@langCode=N'de'
Try Dapper.Logging.
You can get it from NuGet. The way it works is you pass your code that creates your actual database connection into a factory that creates wrapped connections. Whenever a wrapped connection is opened or closed or you run a query against it, it will be logged. You can configure the logging message templates and other settings like whether SQL parameters are saved. Elapsed time is also saved.
In my opinion, the only downside is that the documentation is sparse, but I think that's just because it's a new project (as of this writing). I had to dig through the repo for a bit to understand it and to get it configured to my liking, but now it's working great.
From the documentation:
The tool consists of simple decorators for the DbConnection and DbCommand which track the execution time and write messages to the ILogger<T>. The ILogger<T> can be handled by any logging framework (e.g. Serilog). The result is similar to the default EF Core logging behavior.
The lib declares a helper method for registering the IDbConnectionFactory in the IoC container. The connection factory is SQL provider agnostic. That's why you have to specify the real factory method:
services.AddDbConnectionFactory(prv => new SqlConnection(conStr));
After registration, the IDbConnectionFactory can be injected into classes that need a SQL connection.
private readonly IDbConnectionFactory _connectionFactory;

public GetProductsHandler(IDbConnectionFactory connectionFactory)
{
    _connectionFactory = connectionFactory;
}
The IDbConnectionFactory.CreateConnection will return a decorated version that logs the activity.
using (DbConnection db = _connectionFactory.CreateConnection())
{
    //...
}
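For instance, a handler might use the decorated connection with an ordinary Dapper query; everything below is a hypothetical sketch (Product, GetProducts and the SQL text are not from the Dapper.Logging docs):
// requires: using Dapper; using System.Data.Common; using System.Linq;
public async Task<IReadOnlyList<Product>> GetProducts(int categoryId)
{
    using (DbConnection db = _connectionFactory.CreateConnection())
    {
        // The open/close, SQL text, parameters and elapsed time end up in ILogger<T>.
        var rows = await db.QueryAsync<Product>(
            "SELECT Id, Name FROM Products WHERE CategoryId = @categoryId",
            new { categoryId });
        return rows.ToList();
    }
}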
This is not exhaustive and is essentially a bit of a hack, but if you have your SQL and you want to see it with its parameters initialized, it's useful for basic debugging. Set up this extension method, then call it anywhere as desired.
public static class DapperExtensions
{
    public static string ArgsAsSql(this DynamicParameters args)
    {
        if (args is null) throw new ArgumentNullException(nameof(args));
        var sb = new StringBuilder();
        foreach (var name in args.ParameterNames)
        {
            var pValue = args.Get<dynamic>(name);
            var type = pValue.GetType();
            if (type == typeof(DateTime))
                sb.AppendFormat("DECLARE @{0} DATETIME = '{1}'\n", name, pValue.ToString("yyyy-MM-dd HH:mm:ss.fff"));
            else if (type == typeof(bool))
                sb.AppendFormat("DECLARE @{0} BIT = {1}\n", name, (bool)pValue ? 1 : 0);
            else if (type == typeof(int))
                sb.AppendFormat("DECLARE @{0} INT = {1}\n", name, pValue);
            else if (type == typeof(List<int>))
                sb.AppendFormat("-- REPLACE @{0} IN SQL: ({1})\n", name, string.Join(",", (List<int>)pValue));
            else
                sb.AppendFormat("DECLARE @{0} NVARCHAR(MAX) = '{1}'\n", name, pValue.ToString());
        }
        return sb.ToString();
    }
}
You can then just use this in the immediate or watch windows to grab the SQL.
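For example (the names are illustrative), right before the Dapper call you can evaluate something like:
// In the Immediate window, or temporarily in code:
var debugSql = parameters.ArgsAsSql() + sql;   // DECLAREs followed by the statement itself
System.Diagnostics.Debug.WriteLine(debugSql);  // paste the output into SSMS to run it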
Just to add an update here, since I see this question still gets quite a few hits: these days I use either Glimpse (though it seems to be dead now) or Stackify Prefix, both of which have SQL command trace capabilities.
It's not exactly what I was looking for when I asked the original question, but they solve the same problem.

EF6 doesn't update table when it claims it has

Problem
Entity Framework reads from the database and then falsely logs that it has written back to the database. I first tried using synchronous code and then async code. Any ideas, please?
Background
As a first step in moving my .NET 4.6.1 MVC site away from ADO.NET to EF6, I referenced the EF library used by a sister project in the same system and have tried to read a record, update one field, and save the record back to the database. Reading is ok, but I'm confused by what happens when I perform an update.
For the case in question, the SQL field starts off as null in an existing record, and later gets this single update to set its value.
The logger used below is Log4Net using ADO.NET. It logs to the same database, different table.
The environments are local VS against a remote IIS/SQL Server, and also published from VS so that everything runs on the remote IIS and SQL Server.
Method
using (var db = new EF_Entities())
{
    // Setup the logging.
    db.Database.Log = s => log.Debug(s);

    // Fetch record from SQL (proven working)
    TheTableObject tto = db.TheTableObject.SingleOrDefault(x => x.Id == 1234567);

    // Update SQL.
    if (tto != null)
    {
        tto.TheFieldSlashProperty = "A short string";
        await db.SaveChangesAsync();
    }
}
Result
The initial data read is correct.
The data write does not happen despite what the log says.
The logged output from db.SaveChangesAsync() is this: -
Opened connection asynchronously at 19/03/21 13:09:21 +00:00
Started transaction at 19/03/21 13:09:22 +00:00
UPDATE [dbo].[TheTableObject] SET [TheFieldSlashProperty] = @0 WHERE ([Id] = @1)
-- @0: 'A short string' (Type = String, Size = 50)
-- @1: '1234567' (Type = Int32)
-- Executing asynchronously at 19/03/21 13:09:23 +00:00
-- Completed in 41 ms with result: 1
Committed transaction at 19/03/21 13:09:25 +00:00
Closed connection at 19/03/21 13:09:25 +00:00

Devart ChangeConflictException but values still written to database

I have an intermittent Devart.Data.Linq.ChangeConflictException: Row not found or changed rearing its ugly head. The funny thing is, the change is still written to the database!
The stack trace says:
Devart.Data.Linq.ChangeConflictException: Row not found or changed.
at Devart.Data.Linq.Engine.b4.a(IObjectEntry[] A_0, ConflictMode A_1, a A_2)
at Devart.Data.Linq.Engine.b4.a(ConflictMode A_0)
at Devart.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at Devart.Data.Linq.DataContext.SubmitChanges()
at Billing.Eway.EwayInternal.SuccessCustomerRenewal(String username, Bill bill, EwayTransaction transaction) in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\EwayInternal.cs:line 552
at Billing.Eway.Eway.BillAllUsers() in c:\Users\Ian\Source\Repos\billing-class-library\Billing\Billing\Eway\Eway.cs:line 138
And my code for Billing.Eway.EwayInternal.SuccessCustomerRenewal:
internal static void SuccessCustomerRenewal(string username, Bill bill, EwayTransaction transaction)
{
    // Give them their points!
    ApplyBillToCustomerAccount(username, bill, true);
    BillingEmail.SendRenewalSuccessEmail(username, bill, transaction);

    using (MsSqlDataClassesDataContext msSqlDb = new MsSqlDataClassesDataContext())
    {
        // TODO: Remove this logging
        msSqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MsSQL.txt", true) { AutoFlush = true };

        EwayCustomer ewayCustomer = msSqlDb.EwayCustomers.First(c => c.Username == username);
        ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);

        using (MySqlDataContext mySqlDb = new MySqlDataContext())
        {
            // TODO: Remove this logging
            mySqlDb.Log = new StreamWriter(@"logs\db\" + Common.GetCurrentTimeStamp() + "-MySQL.txt", true) { AutoFlush = true };

            BillingMySqlContext.Customer grasCustomer = mySqlDb.Customers.First(c => c.Username == username);

            // Extend their membership date out so that the plan doesn't expire
            // because of a failed credit card charge.
            grasCustomer.MembershipDate = ewayCustomer.NextBillingDate.AddDays(1);

            mySqlDb.SubmitChanges(); // <-- This is line 552
        }

        msSqlDb.SubmitChanges();
    }
}
I know that the issue occurs on the mySqlDb.SubmitChanges() line, since that DB context is the one using Devart (Linq solution for MySQL databases): the other context uses pure MS Linq.
Not only is the change written to the MySql DB (inner using block), but it is also written to the MsSql DB (outer using block). But that's where the magical success ends.
If I could I would write a Minimal, Complete and Verifiable example, but strangely I'm unable to generate a Devart ChangeConflictException.
So, why does the change get saved to the database after a Devart.Data.Linq.ChangeConflictException? When I previously encountered System.Data.Linq.ChangeConflictException changes weren't saved.
Edit 1:
I've also now included the .PDB file and gotten line number confirmation of the exact source of the exception.
Edit 2:
I now understand why I can't generate a ChangeConflictException, so how is it happening here?
These are the attributes for MembershipDate:
[Column(Name = @"Membership_Date", Storage = "_MembershipDate", CanBeNull = false, DbType = "DATETIME NOT NULL", UpdateCheck = UpdateCheck.Never)]
I know I can explicitly force my changes through to override any potential conflict, but that seems undesirable (I don't know what I would be overriding!). Similarly I could wrap the submit in a try block, and retry (re-reading each time) until success, but that seems clunky. How should I deal with this intermittent issue?
Edit 3:
It's not caused by multiple calls. This function is called in one place, by a single-instance app. It creates log entries every time it is run, and they are only getting created once. I have since moved the email call to the top of the method: the email only gets sent once, the exception occurs, and database changes are still made.
I believe it has something to do with the using blocks. Whilst stepping through the debugger on an unrelated issue, I entered the using block, but stopped execution before the SubmitChanges() call. And the changes were still written to the database. My understanding was that using blocks were to ensure resources were cleaned up (connections closed, etc), but it seems that the entire block is being executed. A new avenue to research...
But it still doesn't answer how a ChangeConflictException is even possible given Devart explicitly ignores them.
Edit 4:
So I wasn't going crazy, the database change did get submitted even after I ended execution in the middle of the using block, but it only works for websites.
Edit 5:
As per #Evk's suggestion I've included some DB logging (and updated the stacktrace and code snippet above). The incidence rate of this exception seems to have dropped, as it has only just happened since I implemented the logging. Here are the additional details:
Outer (MS SQL) logfile:
SELECT TOP (1) [t0].[id], [t0].[Username], [t0].[TokenId], [t0].[PlanId], [t0].[SignupDate], [t0].[NextBillingDate], [t0].[PaymentType], [t0].[RetryCount], [t0].[AccountStatus], [t0].[CancelDate]
FROM [dbo].[EwayCustomer] AS [t0]
WHERE [t0].[Username] = @p0
-- @p0: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [dyonis]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.18408a
(It just shows the SELECT call (.First()), none of the updates show).
Inner (MySQL) logfile:
SELECT t1.Customer_ID, t1.Username, t1.Account_Group, t1.Account_Password, t1.First_Name, t1.Last_Name, t1.Account_Type, t1.Points, t1.PromoPoints, t1.Phone, t1.Cell, t1.Email, t1.Address1, t1.Address2, t1.City, t1.State, t1.Country, t1.Postcode, t1.Membership_Group, t1.Suspend_On_Zero_Points, t1.Yahoo_ID, t1.MSN_ID, t1.Skype_ID, t1.Repurchase_Thresh, t1.Active, t1.Delete_Account, t1.Last_Activity, t1.Membership_Expires_After_x_Days, t1.Membership_Date, t1.auth_name, t1.created_by, t1.created_on, t1.AccountGroup_Points_Used, t1.AccountGroup_Points_Threashold, t1.LegacyPoints, t1.Can_Make_Reservation, t1.Gallery_Access, t1.Blog_Access, t1.Private_FTP, t1.Photometrica, t1.Promo_Code, t1.Promo_Expire_DTime, t1.Gift_FirstName, t1.Gift_LastName, t1.Gift_Email, t1.Gift_Phone, t1.Gift_Active, t1.NoMarketingEmail, t1.Can_Schedule, t1.Refered_By, t1.Q1_Hear_About_Us, t1.Q2_Exp_Level, t1.Q3_Intrests, t1.GIS_DTime_UTC, t1.Membership_Expire_Notice_Sent, t1.Promo_Expire_Notice_Sent, t1.isEncrypted, t1.PlanId
FROM grasbill.customers t1
WHERE t1.Username = :p0 LIMIT 1
-- p0: Input VarChar (Size = 6; DbType = AnsiString) [dyonis]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
UPDATE grasbill.customers SET Membership_Date = :p1 WHERE Customer_ID = :key1
-- p1: Input DateTime (Size = 0; DbType = DateTime) [8/3/2016 4:42:53 AM]
-- key1: Input Int (Size = 0; DbType = Int32) [7731]
-- Context: Devart.Data.MySql.Linq.Provider.MySqlDataProvider Mapping: AttributeMappingSource Build: 4.4.519.0
(Shows the SELECT and UPDATE calls)
So the log files don't really give any clue as to what's happening, but again the MS SQL database has been updated! The NextBillingDate field has been set correctly, as per this line:
ewayCustomer.NextBillingDate = Common.GetPlanExpiry(bill.BillPlan);
If it hadn't been updated, the user would have been billed again on the next timer tick (5 mins later), and I can see from logging that didn't happen.
One other interesting thing to note is the log file timestamps. As you can see from the code above I grab the current (UTC) time for the log filename. Here is the information shown by Windows File Explorer:
The MS SQL logfile was created at 04:42 (UTC) and last modified at 14:42 (UTC+10, Windows local-time), but the MySQL logfile was last modified at 15:23 (UTC+10), 41 minutes after it was created. Now I assume the logfile StreamWriter is closed as soon as it leaves scope. Is this delay an expected side effect of the exception? Did it take 41 minutes for the garbage collector to realise I no longer needed a reference to the StreamWriter? Or is something else going on?
Well 6 months later I finally got to the bottom of this problem. Not sure if it will ever help anyone else, but I'll detail it anyway.
There were 2 problems in play here, and 1 of them was idiocy (as they usually are), but one was legitimately something I did not know or expect.
Problem 1
The reason the changes were magically made to the database even though there was an exception was because the very first line of code in that function ApplyBillToCustomerAccount(username, bill, true); updates the database! <facepalm>
Problem 2
The (Devart) ChangeConflictException isn't only thrown if the data has changed, but also if you're not making any changes. MS SQL stores DateTimes with great precision, but MySQL (or the one I'm running at least) only stores down to seconds. And here's where the intermittency came in. If my database calls were quick enough, or just near the second boundary, they both got rounded to the same time. Devart saw no changes to be written, and threw a ChangeConflictException.
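In other words, a hypothetical illustration of the rounding:
// MS SQL keeps sub-second precision; the MySQL DATETIME column stores whole seconds.
DateTime newValue    = new DateTime(2016, 8, 3, 4, 42, 53, 497); // value being assigned
DateTime storedValue = new DateTime(2016, 8, 3, 4, 42, 53);      // what MySQL already holds

// Truncated to seconds they compare equal, so Devart finds no change to submit
// and SubmitChanges() surfaces a ChangeConflictException instead of issuing an UPDATE.
DateTime truncated = new DateTime(newValue.Year, newValue.Month, newValue.Day,
                                  newValue.Hour, newValue.Minute, newValue.Second);
bool looksUnchanged = truncated == storedValue;  // true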
I recently made some optimisations to the database which resulted in far greater responsiveness, and massively increased incidence of this exception. That was one of the clues.
Also I tried changing the Found Rows parameter to true as instructed in the linked Devart post but found it did not help in my case. Or perhaps I did it wrong. Either way now that I've found the source of the issue I can eliminate the duplicate database updates.

Cannot create a capped collection larger than 500 Megabytes

I'm using a Mongo db on 32bit system and I need to create a large capped collection with a max size of 1GB. Everything works fine on 64bit system, but on 32bit I'm getting the error:
com.mongodb.CommandResult$CommandFailure: command failed [command failed [create] {
"serverUsed" : "localhost:27017" ,
"errmsg" : "exception: assertion db\\pdfile.cpp:437" ,
"code" : 0 ,
"ok" : 0.0}
The total storage size for the server is 2GB on 32bit system, but even with this size I can't create a collection larger than 500MB. What does this magic number mean?
Mongo db server version is 2.0.6
Additional info:
I have a couple of database files, the total size of which is 34 MB. Before running Mongo, I copy those files into the 'data' directory, start Mongo, and then in the shell I see the same number for the total size - 35651584 (34 MB) (the command used is taken from the comments below). If I try to create a collection of size 500 MB, I see a new file added (512 MB). But if, for example, I try to create a collection of size 600 MB, I get the error described above (although the 512 MB file is still added).
The Mongo db server log
The Mongo db is started with the command line options:
> db.adminCommand("getCmdLineOpts")
{
"argv" : [
"mongod.exe",
"--dbpath",
"..\\data",
"-vvvvvv",
"--logpath",
"..\\log\\server.log"
],
"parsed" : {
"dbpath" : "..\\data",
"logpath" : "..\\log\\server.log",
"vvvvvv" : true
},
"ok" : 1
}
>
MongoDB runs much better on a 64-bit system; can you change to x64? As Stennie said, you're most likely hitting an mmap limit due to other data in your database.
Can you test this hypothesis by connecting with the mongo shell and trying to create a new capped collection that is 1 byte larger than 512 MB -
db.createCollection("mycoll6", {capped:true, size:536870913})
You should hopefully get the following error message -
"errmsg" : "exception: can't map file memory - mongo requires 64 bit build for larger datasets",
In the Mongo shell, connect to the admin database and view the size of your database to see how much data you have -
use admin
show dbs
Update: based on some additional testing (I used Ubuntu 12.04 32-bit), this seems like it could be a bug.
Ubuntu Testing
db.createCollection("my13", {capped:true, size:536608768})
{
"errmsg" : "exception: assertion db/pdfile.cpp:437",
"code" : 0,
"ok" : 0
}
db.createCollection("my13", {capped:true, size:536608767})
{ "ok" : 1 }`
536608767 bytes is a little under 512 MB, leaving room for some sort of header in the file.
I thought it might be related to smallfiles, as all 32-bit installs run with that option; however, an x64 build with smallfiles does not display the same symptoms.
I have logged SERVER-6722 for this issue.

How do I update a change using SubSonic and MySQL

I was trying to TDD using SubSonic 2.1, MySQL and C# and when I got to testing updating an existing record using the code below,
public void Test2()
{
    User.Insert("jmarcus1", "jmarcus1", "jackass", "marcus", 3, false);
    User users = new User();
    int ulevel = 1;
    User.Update(2, "jmarcus1", "jmarcus1", "jackass", "marcus", ulevel, false);
    Assert.AreEqual(1, users.Ulevel);
}
I got the following output:
------ Test started: Assembly: SalMan.dll ------
Starting the MbUnit Test Execution
Exploring SalMan, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
MbUnit 2.4.2.130 Addin
Found 3 tests
[success] TestFixture1.Test
[success] TestFixture1.Test1
[failure] TestFixture1.Test2
TestCase 'TestFixture1.Test2'
failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE `userid` = 1; SELECT 1 AS id' at line 1
MySql.Data.MySqlClient.MySqlException
Message: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE `userid` = 1; SELECT 1 AS id' at line 1
Source: MySql.Data
StackTrace:
at MySql.Data.MySqlClient.MySqlStream.OpenPacket()
at MySql.Data.MySqlClient.NativeDriver.ReadResult(UInt64& affectedRows, Int64& lastInsertId)
at MySql.Data.MySqlClient.MySqlDataReader.GetResultSet()
at MySql.Data.MySqlClient.MySqlDataReader.NextResult()
at MySql.Data.MySqlClient.MySqlCommand.ExecuteReader(CommandBehavior behavior)
at MySql.Data.MySqlClient.MySqlCommand.ExecuteScalar()
C:\svn\subsonicproject\trunk\SubSonic\DataProviders\MySqlDataProvider.cs(280,0): at SubSonic.MySqlDataProvider.ExecuteScalar(QueryCommand qry)
C:\svn\subsonicproject\trunk\SubSonic\DataProviders\DataService.cs(533,0): at SubSonic.DataService.ExecuteScalar(QueryCommand cmd)
C:\svn\subsonicproject\trunk\SubSonic\ActiveRecord\ActiveRecord.cs(182,0): at SubSonic.ActiveRecord`1.Save(String userName)
D:\My Documents\Visual Studio 2008\Projects\SalMan\SalMan\Generated\User.cs(352,0): at Salman.User.Update(Int32 varUserid, String varUsername, String varPassword, String varFname, String varLname, Int32 varUlevel, Boolean varStatus)
D:\My Documents\Visual Studio 2008\Projects\SalMan\SalMan\Tests\TestFixture1.cs(40,0): at salman.TestFixture1.Test2()
[reports] generating HTML report
TestResults: file:///C:/Documents%20and%20Settings/*****************/Application%20Data/MbUnit/Reports/SalMan.Tests.html
2 passed, 1 failed, 0 skipped, took 7.66 seconds.
Does anyone have a workaround for this?
It looks to me like your args are out of order. The PK should be the first thing - are you sure they are lining up correctly?
Are you using ActiveRecord? If so - have you tried this:
User u = new User();
u.UserName = "jmarcus1";
u.Name = "jackass";
u.Ulevel = 1;
...
u.Save();

User u = new User("jmarcus1");
u.Ulevel = 2;
u.Save();
Apparently the solution, following Rob's excellent advice above, was to:
User u = User.FetchByID(2);
u.Ulevel = ulevel;
u.Save();
I was not too sure that the snippet above would work for updates until you specify the record you wish to update.
My question to Rob, however, is: why does "u.Update(2, "jmarcus",..." give the error above?
