ChangeConflictException when submitting LINQ entity with XML field - C#

I have a business object type representing customers which, when its .Save() method is called, attempts to retrieve (using LINQ) a matching entity from the database based on the object's ID property. If it does not find a matching entity, it creates a new one; if it does find a matching entity it updates that entity's properties and calls [my datacontext].SubmitChanges().
That last part is the problem. Much of the data for a user is stored in an XML field in the database, named content_xml. There is a bug in the code which is failing to retrieve two of those data items ("coordinates" and "sales_groups") when constructing the business object, and so when the .Save() method goes to update the entity, the XML it's sending is missing those elements.
For some reason this is throwing a ChangeConflictException, stating that "1 of 12" updates failed. In order to identify what was causing the problem, I used the code from http://msdn.microsoft.com/en-us/library/Bb386985%28v=vs.100%29.aspx to extract information about the change conflict.
From that, I see that [my datacontext].ChangeConflicts contains a single ObjectChangeConflict, which itself contains a single MemberChangeConflict representing the XML field.
The part I do not understand is that when checking the values of currVal, origVal and databaseVal, I see that the XML data held in currVal is what I would expect given the bug (it's missing the two elements), while the XML for origVal and databaseVal is identical. That shouldn't cause a conflict, should it?
Even stranger, when I manually correct the bug by providing the correct (according to the database) values for the missing elements just before the .SubmitChanges() call, it still causes a conflict, even though the XML from all three (currVal, origVal and databaseVal) now looks identical.
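For reference, the conflict inspection boils down to something like this (adapted from that MSDN sample; db is my datacontext):

using System;
using System.Data.Linq;

try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
        {
            object currVal = mcc.CurrentValue;       // what my code is submitting
            object origVal = mcc.OriginalValue;      // what L2S thinks it originally read
            object databaseVal = mcc.DatabaseValue;  // what is in the database right now
            Console.WriteLine("Conflict on member: {0}", mcc.Member.Name);
        }
    }
}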
Can anyone suggest what might be causing the conflict?
Edit:
OK, this is a bit of a surprise, but even if I never set the content_xml property value of the retrieved entity before submitting changes, I still get a conflict on the XML field.

I would guess that the change conflict stems from L2S comparing the old and new values of your XML field in an incorrect way.
A possible workaround is to add a timestamp or rowversion column to the table and update the L2S model. If a table contains a rowversion/timestamp column, only that column is used for detecting change conflicts...
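For example, after adding the column in SQL, the entity member would be mapped something like this (a sketch; the column and property names are hypothetical):

// IsVersion = true tells L2S to use only this column for optimistic
// concurrency checks, so the xml column is no longer compared.
[Column(Name = "row_version", DbType = "rowversion NOT NULL",
        AutoSync = AutoSync.Always, IsDbGenerated = true, IsVersion = true)]
public System.Data.Linq.Binary RowVersion { get; set; }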


Model validation failing for valid form submission

I have a table named EFT_BANK_INFO. Due to reasons I won't get into, I had to split up the form into 2 separate forms. This means that half the fields for this table are edited from one view, and half from another.
Everything was working great until I added form validation to the table's model .cs file. While the code's syntax is correct, all submissions from both forms are flagged as invalid, preventing me from updating, deleting, and adding rows to the table. The submissions are marked invalid because of the separation; i.e., I have Required fields in the model for the second portion of the fields, so when I submit data for one half of the fields, the fields that aren't part of that view/submission are marked as invalid because no data was received for them.
A potential workaround would be to artificially satisfy the validation for the unwanted fields by inserting values for them in the C# controller during the create process. I could do the same for the edit process.
I was wondering if there's a better alternative, given that I must have it structured this way. I'd like it to validate only the relevant fields in the model, rather than all of them, for each submission; validating all of them is what causes the error. No code is really necessary since I have no bugs and know what is wrong; this is more of a theory/solution identification problem. Thanks.
Use ViewModels.
A view model represents only the data that you want to display on your view/page, whether it be used for static text or for input values (like textboxes and dropdowns).
See the accepted answer on the question linked above.
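A minimal sketch of the idea (the class and field names here are hypothetical):

using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

// Covers only the half of EFT_BANK_INFO that the first form edits,
// so validation runs against these fields alone.
public class EftBankInfoStepOneViewModel
{
    [Required]
    public string BankName { get; set; }

    [Required]
    public string AccountNumber { get; set; }
}

[HttpPost]
public ActionResult EditStepOne(EftBankInfoStepOneViewModel model)
{
    if (!ModelState.IsValid)
        return View(model);

    // Map the posted values onto the EFT_BANK_INFO entity and save.
    return RedirectToAction("Index");
}

The second form gets its own view model with its own [Required] fields, so neither form trips over the other's validation.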

In MongoDB, how do I update a list of key/value pairs? - C#

In MongoDB, accessing from the C# driver:
I want to keep a list of keys (ints are fine) that each have a current value (Dictionary<int,int> works well for the concept).
I need to have multiple (10+) machines setting values in this Document. Multiple threads on each machine.
Using C# and the MongoDB driver I need to:
If the key exists, increment the value for that key.
If it does not exist, I need to Add it, and set the value to 1.
Critically Important:
It can't overwrite values others are writing (i.e., no fetching the doc and calling Save() to write it back).
It has to handle adding values that don't already exist gracefully.
I might be able to have a query that inserts a new document with all of the keys set to values of 0 - if this would help, but it won't be easy, so that is not a preferred answer.
I've tried using a Dictionary, and can't seem to figure out how to update it without the insert creating:
null,
{ v=1 }
(which omits the k= element, includes the null that I don't want, and causes deserialization to blow up).
I don't care what method of serialization is used for the dictionary, and am open to any other method of storage.
Any ideas?
My best guess so far is to keep a list of keys separate from the values (two Lists), append the key to the key list if it isn't found, then requery the key list and use the position of the first match as the index into the second list. (This seems like it might have concurrency issues that could be hard to track down.)
I would prefer the LINQ syntax, but am open to using the .Set(string, value) syntax if that makes things work.
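To illustrate the kind of atomic operation this needs (a sketch using the legacy 1.x driver's builders; it assumes the counters live in a subdocument whose field names are the int keys rendered as strings, and that database/documentId are already set up):

using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

var collection = database.GetCollection<BsonDocument>("machine_counters"); // hypothetical name

int key = 5;
var query = Query.EQ("_id", documentId);

// $inc is atomic on the server: it increments an existing field, or creates
// the field with the increment value (i.e. 1) if it does not exist yet.
var update = Update.Inc("counters." + key, 1);

// Upsert also creates the whole document if no document matches the query,
// so no read-modify-write cycle can overwrite another machine's update.
collection.Update(query, update, UpdateFlags.Upsert);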

How does Raven know what collection to include?

I am looking at the following sample code to include referenced documents and avoid a round trip.
var order = session.Query<Order>()
    .Customize(x => x.Include<Order>(o => o.CustomerId)) // load the customer too
    .First();
var customer = session.Load<Customer>(order.CustomerId);
My question is how does Raven know that this o=>o.CustomerId implies the Customer document/collection? At no time was the entity Customer supplied in the query to get the Order entity. Yet Raven claims that the second query to get the Customer can be done against the cache, without any network trip.
If it's by naming convention, which seems like a very poor/fragile/brittle convention to adopt, what happens when I need to include more than one document?
E.g., a car was purchased under two names, so I want to link back to two customers, the primary and secondary customer/driver. Both are stored in the Customer collection.
var sale = session.Query<Sale>()
    .Customize(x => x.Include<Sale>(o => o.PrimaryCustomerId)
                     .Include<Sale>(o => o.SecondaryCustomerId)) // load both customers too
    .First();
var primaryCustomer = session.Load<Customer>(sale.PrimaryCustomerId);
var secondaryCustomer = session.Load<Customer>(sale.SecondaryCustomerId);
How can I do the above in one network trip? How would Raven even know that this o=>o.PrimaryCustomerId and o=>o.SecondaryCustomerId are references to one and the same table, Customer, since obviously the property names and the collection name don't line up?
Raven doesn't have the concept of "tables". It does know about "collections", but they are just a convenience mechanism. Behind the scenes, all documents are stored in one big database. The only thing that makes a "collection" is that each document has a Raven-Entity-Name metadata value.
Both the examples you showed will result in one round trip (each). Your code looks just fine to me.
My question is how does Raven know that this o=>o.CustomerId implies the Customer document/collection? At no time was the entity Customer supplied in the query to get the Order entity.
It doesn't need to be supplied in the query. As long as the data stored in the CustomerId field of the Sale document is a full document key, then that document will be returned to the client and loaded into session.
Yet Raven claims that the second query to get the Customer can be done against the cache, without any network trip.
That's correct. The session container tracks all documents returned - not just the ones from the query results. So later when you call session.Load using the same document key, it already has it in session so it doesn't need to go back to the server.
Regardless of whether you query, load, or include - the document doesn't get deserialized into a static type until you pull it out of the session. That's why you specify the Customer type in the session.Load<Customer> call.
If it's by naming convention, which seems like a very poor/fragile/brittle convention to adopt ...
Nope, it's by the value stored in the property which is a document key such as "customers/123". Every document is addressable by its document key, with or without knowing the static type of the class.
what happens when I need to include more than one document?
The exact same thing. There isn't a limit on how many documents can be included or loaded into session. However, you should be sure to open the session in a using statement so it is disposed properly. The session is a "Unit of Work container".
How would Raven even know that this o=>o.PrimaryCustomerId and o=>o.SecondaryCustomerId are references to one and the same table, Customer, since obviously the property names and the collection name don't line up?
Again, it doesn't matter what the names of the fields are. What matters is that the data in those fields contains a document id, such as "customers/123". If you aren't storing the full string identifier, then you will need to build the document key inside the lambda expression. In other words, if Sale.CustomerId contains just the number 123, then you would need to include it with .Include<Sale>(o=> "customers/" + o.CustomerId).
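Putting the pieces together, your two-customer example collapses to one trip when the includes ride along with the query, e.g. (a sketch, assuming Sale stores full keys like "customers/123" and store is your document store):

using (var session = store.OpenSession())
{
    // One network trip: the sale and both referenced customers come back together.
    var sale = session.Query<Sale>()
        .Customize(x => x.Include<Sale>(o => o.PrimaryCustomerId)
                         .Include<Sale>(o => o.SecondaryCustomerId))
        .First();

    // Both loads are served from the session cache; no further server calls.
    var primaryCustomer = session.Load<Customer>(sale.PrimaryCustomerId);
    var secondaryCustomer = session.Load<Customer>(sale.SecondaryCustomerId);
}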

Strategies for modeling a large (~50) number of properties

Scenario
I'm parsing emails and inserting them into a database using an ORM (NHibernate, to be exact). While my current approach does technically work, I'm not very fond of it, but I can't think of a better solution. The email contains ~50 fields and is sent from a third party; it looks like this (obviously a very short dummy sample):
Field #1: Value 1 Field #2: Value 2
Field #3: Value 3 Field #4: Value 4 Field #5: Value 5
Problem
My problem is that with this many fields to parse, the database table is an absolute monster. AFAIK I can't create proper models employing any kind of relationships either, because each email is entirely static data and doesn't rely on any other sources.
The only idea I have is to find commonalities between the fields and split them into more manageable chunks, say ~10 fields per entity, so 5 entities total. However, I'm not terribly in love with that idea either, seeing as all I'd be doing is creating one-to-one relationships.
What is a good way of managing a large number of properties that are out of your control?
Any thoughts?
Create two tables: one for the main object, and the other for the fields. That way you can programmatically access each field as necessary, and the object model doesn't look too nasty.
But this is just off the top of my head; you have a weird problem.
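A rough sketch of that shape (hypothetical names, with virtual members so NHibernate can proxy them):

using System.Collections.Generic;

public class Email
{
    public virtual int Id { get; set; }
    public virtual IList<EmailField> Fields { get; set; }
}

// One row per parsed field, keyed back to its owning email.
public class EmailField
{
    public virtual int Id { get; set; }
    public virtual Email Email { get; set; }
    public virtual string Name { get; set; }
    public virtual string Value { get; set; }
}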
If the data is coming back in a file that you can parse easily, then you might be able to get away with creating a command-line application that produces scripts and C# which you can then execute and copy/paste into your program. I've done that when creating properties out of tables from HTML pages (like one I had to do recently).
If the 50 properties are actually unique and discrete pieces of data regarding this one entity, I don't see a problem with having those 50 properties (even though that sounds like a lot) on one object. For example, the Type class has a large number of boolean properties describing its data (IsPublic, etc.).
Alternatives:
Well, one option that comes to mind immediately is using a dynamic object and overriding TryGetMember to look up the 'property' name as a key in a dictionary of key/value pairs (where your real set of 50 key/value pairs lives). Of course, figuring out how to map that from your ORM into your entity is another problem, and you'd lose IntelliSense support.
However, just throwing the idea out there.
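A rough sketch of the dynamic idea (class and member names are hypothetical):

using System.Collections.Generic;
using System.Dynamic;

public class EmailFields : DynamicObject
{
    // The real storage: the ~50 parsed key/value pairs.
    private readonly Dictionary<string, string> _fields = new Dictionary<string, string>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        string value;
        bool found = _fields.TryGetValue(binder.Name, out value);
        result = value;
        return found;
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _fields[binder.Name] = (string)value;
        return true;
    }
}

// Usage: dynamic email = new EmailFields(); email.Field1 = "Value 1";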
Use a dictionary instead of separate fields. In the database, you just have a table for the field name and its value (and what object it belongs to).

How can I use validation to enforce the uniqueness of a property in ASP.NET MVC 2?

Imagine an object with a field that can't have a duplicate value in the database. My first instinct was to create a unique attribute that I could apply as a data annotation to a property. This unique attribute would hit the database and check whether the value already exists. That would work when executing a create method but would fail on an update: on an update, I would get a duplicate-value error for every unique field of my entity whose value I don't want to change. What would be a good way, or an established practice, to accomplish this in ASP.NET MVC 2 in a way that fits nicely with the ModelState? Passing the id of my object to the attribute validator could work, by checking whether the duplicate value found belongs to the same entity I am updating, but I don't know how to get that data from inside the validator.
Please forgive me if this is a stupid question or if it is phrased incoherently. It's almost 3 in the morning and I've been coding since the morning of yesterday.
For this kind of validation, I would let the database do what it already does so well. Make sure your database has the unique constraint and let it report an error if you violate it. You can then add the error to the model errors (with a nice friendly bit of text, rather than just plonking in the raw SQL error).
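Something along these lines (a sketch; the property name, message, and persistence call are hypothetical, while 2601/2627 are SQL Server's duplicate-key error numbers):

using System.Data.SqlClient;

try
{
    dataContext.SubmitChanges(); // or whatever persistence call you use
}
catch (SqlException ex)
{
    // 2601 = duplicate key in a unique index, 2627 = unique constraint violation
    if (ex.Number != 2601 && ex.Number != 2627)
        throw;

    ModelState.AddModelError("UserName", "That value is already in use.");
    return View(model);
}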
If you are determined to perform a check yourself, you can get around the UPDATE problem by excluding the current record...
SELECT COUNT(*)
FROM myTable
WHERE myTable.UniqueValue = 'ShouldBeUnique'
AND myTable.Id <> 5
In this example, you use the id of the record you are updating to avoid checking it, which means you just check other records to see if they contain the unique value.
