I'm writing C# code that writes to a Mongo database used by an existing Web app (written in PHP), so I must not change the existing structure of the database. The database structure looks something like this:
{
"_id": ObjectId("5572ee670e86b8ec0ed82c61")
"name": "John Q. Example",
"guid": "12345678-1234-5678-abcd-fedcba654321",
"recordIsDeleted": false,
"address":
{
"line1": "123 Main St.",
"city": "Exampleville"
}
}
I read that into a class that looks like this:
public class Person : MongoMappedBase
{
public ObjectId Id { get; set; }
public Guid Guid { get; set; }
public bool RecordIsDeleted { get; set; }
public string Name { get; set; }
public AddressData Address { get; set; }
// etc.
}
public class AddressData : MongoMappedBase
{
public string Line1 { get; set; }
public string City { get; set; }
// etc.
}
The reading code looks like:
var collection = db.GetCollection<Person>("people");
List<Person> people = collection.Find<Person>(_ => true).ToListAsync().Result;
(Note: I'm still in development. In production, I'm going to switch to ToCursorAsync() and loop through the data one at a time, so don't worry about the fact that I'm pulling the whole list into memory.)
So far, so good.
However, when I write the data out, this is what it looks like:
{
"_id": ObjectId("5572ee670e86b8ec0ed82c61")
"name": "John Q. Example",
"guid": "12345678-1234-5678-abcd-fedcba654321",
"recordIsDeleted": false,
"address":
{
"_t": "MyApp.MyNamespace.AddressData, MyApp",
"_v":
{
"line1": "123 Main St.",
"city": "Exampleville"
}
}
}
Notice how the address field looks different. That's not what I want. I want the address data to look just like the address data input (no _t or _v fields). In other words, the part that ended up as the contents of _v is what I wanted to persist to the Mongo database as the value of the address field.
Now, if I was just consuming the Mongo database from my own C# code, this would probably be fine: if I were to deserialize this data structure, I assume (though I haven't yet verified) that Mongo would use the _t and _v fields to create instances of the right type (AddressData), and put them in the Address property of my Person instances. In which case, everything would be fine.
But I'm sharing this database with a PHP web app that is not expecting to see those _t and _v values in the address data, and won't know what to do with them. I need to tell Mongo "Please do not serialize the type of the Address property. Just assume that it's always going to be an AddressData instance, and just serialize its contents without any discriminators."
The code I'm currently using to persist the objects to Mongo looks like this:
public UpdateDefinition<TDocument> BuildUpdate<TDocument>(TDocument doc) {
var builder = Builders<TDocument>.Update;
UpdateDefinition<TDocument> update = null;
foreach (PropertyInfo prop in typeof(TDocument).GetProperties())
{
if (prop.PropertyType == typeof(MongoDB.Bson.ObjectId))
continue; // Mongo doesn't allow changing Mongo IDs
if (prop.GetValue(doc) == null)
continue; // If we didn't set a value, don't change existing one
if (update == null)
update = builder.Set(prop.Name, prop.GetValue(doc));
else
update = update.Set(prop.Name, prop.GetValue(doc));
}
return update;
}
public void WritePerson(Person person) {
var update = BuildUpdate<Person>(person);
var filter = Builders<Person>.Filter.Eq(
"guid", person.Guid.ToString()
);
var collection = db.GetCollection<Person>("people");
var updateResult = collection.FindOneAndUpdateAsync(
filter, update
).Result;
}
Somewhere in there, I need to tell Mongo "I don't care about the _t field on the Address property, and I don't even want to see it. I know what type of objects I'm persisting into this field, and they'll always be the same." But I haven't yet found anything in the Mongo documentation to tell me how to do that. Any suggestions?
I figured it out. I was indeed having the problem described at https://groups.google.com/forum/#!topic/mongodb-user/QGctV4Hbipk where Mongo expects a base type but is given a derived type. The base type Mongo was expecting, given my code above, was actually object! I discovered that builder.Set() is actually a generic method, builder.Set<TField>, which can figure out its TField type parameter from the type of its second argument (the field data). Since I was using prop.GetValue(), which returns object, Mongo was expecting an object instance on my Address field (and the other fields that I left out of the question) and therefore putting _t on all those fields.
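To make that concrete, the difference looks roughly like this (a sketch using the classes above; person is an assumed Person instance):
object boxed = person.Address;

// TField is inferred as object, so the driver serializes a discriminated wrapper (_t/_v).
var withDiscriminator = Builders<Person>.Update.Set("address", boxed);

// TField is inferred as AddressData, so the address is written as a plain sub-document.
var plain = Builders<Person>.Update.Set("address", person.Address);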
The answer was to explicitly cast the objects being returned from prop.GetValue(), so that builder.Set() could call the correct generic method (builder.Set<AddressData>() rather than builder.Set<object>()) in this case. The following was a bit ugly (I wish there was a way to get a specific generic function overload by reflection at runtime, as I could have converted that whole switch statement to a single reflection-based method call), but it worked:
public UpdateDefinition<TDocument> BuildUpdate<TDocument>(TDocument doc) {
var builder = Builders<TDocument>.Update;
var updates = new List<UpdateDefinition<TDocument>>();
foreach (PropertyInfo prop in typeof(TDocument).GetProperties())
{
if (prop.PropertyType == typeof(MongoDB.Bson.ObjectId))
continue; // Mongo doesn't allow changing Mongo IDs
if (prop.GetValue(doc) == null)
continue; // If we didn't set a value, don't change existing one
switch (prop.PropertyType.Name) {
case "AddressData":
updates.Add(builder.Set(prop.Name, (AddressData)prop.GetValue(doc)));
break;
// Etc., etc. Many other type names here
default:
updates.Add(builder.Set(prop.Name, prop.GetValue(doc)));
break;
}
}
return builder.Combine(updates);
}
This resulted in the Address field, and all the other fields I was having trouble with in my real code, being persisted without any _t or _v fields, just like I wanted.
Thanks @rmunn for this question; it helped me a lot.
I was struggling with this same problem when I found this Q&A. After further digging I found that you can remove the switch statement in the accepted answer by using BsonDocumentWrapper.Create(). This is a link to where I found the tip.
Here's an example for anyone else looking:
public UpdateDefinition<TDocument> BuildUpdate<TDocument>(TDocument doc) {
var builder = Builders<TDocument>.Update;
var updates = new List<UpdateDefinition<TDocument>>();
foreach (PropertyInfo prop in typeof(TDocument).GetProperties())
{
if (prop.PropertyType == typeof(MongoDB.Bson.ObjectId))
continue; // Mongo doesn't allow changing Mongo IDs
if (prop.GetValue(doc) == null)
continue; // If we didn't set a value, don't change existing one
updates.Add(builder.Set(prop.Name,
    BsonDocumentWrapper.Create(prop.PropertyType, prop.GetValue(doc))));
}
return builder.Combine(updates);
}
You can convert your object to a JSON string, and from that JSON string you can convert back to a BsonArray (if it's a list) or a BsonDocument (if it's an object).
T is the type of the object that you want to update:
public UpdateDefinition<T> getUpdate(T t)
{
    PropertyInfo[] props = typeof(T).GetProperties();
    var updates = new List<UpdateDefinition<T>>();
    foreach (PropertyInfo prop in props)
    {
        object value = prop.GetValue(t);
        if (prop.PropertyType.Name == "List`1")
        {
            // If it's a list, round-trip it through JSON into a BsonArray.
            updates.Add(Builders<T>.Update.Set(prop.Name,
                BsonSerializer.Deserialize<BsonArray>(JsonConvert.SerializeObject(value))));
        }
        else if (prop.PropertyType == typeof(object))
        {
            // If it's an object (e.g. a dynamic property), round-trip it through JSON into a BsonDocument.
            updates.Add(Builders<T>.Update.Set(prop.Name,
                BsonSerializer.Deserialize<BsonDocument>(JsonConvert.SerializeObject(value))));
        }
        else
        {
            // If it's a primitive data type, set it directly.
            updates.Add(Builders<T>.Update.Set(prop.Name, value));
        }
    }
    return Builders<T>.Update.Combine(updates);
}
This will update any type of object or list; you just need to pass the object.
Related
Given a class something like this:
public class MyClass : ValidationValues
{
public string Foo { get; set; }
[Required(ErrorMessage = "Bar is required.")]
public string Bar { get; set; }
// and many more
}
public class ValidationValues
{
public bool IsValid { get; set; } = true;
public string InvalidReason { get; set; }
}
I need to determine whether a property is required while looping over the items as a generic list. By looking into the Watch window, I've figured out one way, but it feels clunky, and I'm thinking it should be simpler.
For some context, this logic is inside of an Azure Function. So no Views, no MVC, etc. The function is a Blob Storage trigger that picks up a .CSV file with a | delimited list which gets deserialized into a List<MyClass>. We do not want to enforce the Required attributes at deserialization because we want more granular control.
So given a file like this:
value1 | |
value2 | something
What eventually gets sent back to the user is something like this:
[
{
"foo": "value1",
"bar": "",
"isValid": false,
"InvalidReason" : "Bar is required"
},
{
"foo": "value2",
"bar": "something",
"isValid": true,
"InvalidReason" : ""
}
]
Here's what I have now:
foreach (T item in itemList) // where 'itemList' is a List<T> and in this case T is MyClass
{
foreach (PropertyInfo property in item.GetType().GetProperties())
{
if (property.CustomAttributes.ToList()[0].AttributeType.Name == "RequiredAttribute")
{
// validate, log, populate ValidationValues
}
}
}
This is the part I don't like:
property.CustomAttributes.ToList()[0].AttributeType.Name == "RequiredAttribute"
Sometimes when I figure out a coding challenge, I tell myself, "This is the way". But in this case, I'm pretty sure this isn't the way.
You can rewrite that line using GetCustomAttribute:
using System.ComponentModel.DataAnnotations;
using System.Reflection;

foreach (T item in itemList) // where 'itemList' is a List<T> and in this case T is MyClass
{
    foreach (PropertyInfo property in item.GetType().GetProperties())
    {
        var attribute = property.GetCustomAttribute<RequiredAttribute>();
        if (attribute != null)
        {
            // validate, log, populate ValidationValues
        }
    }
}
Reflection is slow, or at least relatively slow, so the most important thing here is: don't do this per instance. You can either cache the results per Type (from GetType()), or just use T and never even check .GetType() per instance, depending on your intent. This includes caching which properties exist for a given type and which of them are required. For real bonus points, though, use meta-programming to emit, either at runtime or at build time via a "generator", a method that does exactly what you want, without any loops, tests, etc.; i.e. in this case it might emit a method that does the equivalent of
void ValidateMyClass(MyClass obj)
{
if (string.IsNullOrWhiteSpace(obj.Bar))
{
DoSomething("Bar is required.");
}
}
This can be done in a variety of ways, including the Expression API, the emit API (ILGenerator), emitting C# and using CSharpCodeProvider, or the "generators" API.
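For instance, here is a hedged sketch of the Expression-API route (RequiredValidator and FirstMissing are illustrative names, not an established API): it builds and caches one compiled delegate per type that checks every [Required] string property, so the per-item loop does no reflection at all.
using System;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

static class RequiredValidator<T>
{
    // Compiled once per T; no reflection on the per-item hot path.
    public static readonly Func<T, string> FirstMissing = Build();

    private static Func<T, string> Build()
    {
        var item = Expression.Parameter(typeof(T), "item");
        Expression body = Expression.Constant(null, typeof(string)); // null means "valid"

        foreach (var prop in typeof(T).GetProperties()
                     .Where(p => p.PropertyType == typeof(string) &&
                                 p.GetCustomAttribute<RequiredAttribute>() != null)
                     .Reverse())
        {
            // if (string.IsNullOrWhiteSpace(item.Prop)) return "Prop is required."; else <rest>
            var isMissing = Expression.Call(
                typeof(string).GetMethod(nameof(string.IsNullOrWhiteSpace), new[] { typeof(string) }),
                Expression.Property(item, prop));
            body = Expression.Condition(isMissing,
                Expression.Constant(prop.Name + " is required."),
                body);
        }
        return Expression.Lambda<Func<T, string>>(body, item).Compile();
    }
}

// Usage: string reason = RequiredValidator<MyClass>.FirstMissing(item); // null when the item is valid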
I am getting JSON data from a web service. It provides me with form data containing different questions and answers, and every answer is a different C# object. I am trying to find the best way to map the answers to the correct C# objects.
For example, if the question ID is "37", then it's an Address object.
I have a JSON string in the format below:
"answers": {
"37": {
"name": "yourAddress37",
"order": "6",
"sublabels": "{\"cc_firstName\":\"First Name\",\"cc_lastName\":\"Last Name\",\"cc_number\":\"Credit Card Number\",\"cc_ccv\":\"Security Code\",\"cc_exp_month\":\"Expiration Month\",\"cc_exp_year\":\"Expiration Year\",\"addr_line1\":\"Street Address\",\"addr_line2\":\"Street Address Line 2\",\"city\":\"City\",\"state\":\"State \\/ Province\",\"postal\":\"Postal \\/ Zip Code\",\"country\":\"Country\"}",
"text": "Your Home Address:",
"type": "control_address",
"answer": {
"addr_line1": "148 east 38st ",
"addr_line2": "",
"city": "Brooklyn ",
"state": "Ny",
"postal": "11203",
"country": ""
},
"prettyFormat": "Street Address: 148 east 38st <br>City: Brooklyn <br>State / Province: Ny<br>Postal / Zip Code: 11203<br>"
},
"38": {
"name": "emergencyContact",
"order": "9",
"sublabels": "{\"prefix\":\"Prefix\",\"first\":\"First Name\",\"middle\":\"Middle Name\",\"last\":\"Last Name\",\"suffix\":\"Suffix\"}",
"text": "Emergency Contact Name:",
"type": "control_fullname",
"answer": {
"first": "Pauline ",
"last": "Sandy "
},
"prettyFormat": "Pauline Sandy "
}
}
and it maps to the following C# property:
public Dictionary<int, answer> answers{ get; set; }
Then I have a generic Answer class
public class answer
{
public string name { get; set; }
public dynamic answer { get; set; }
}
If you look at the answer data from the JSON, you will see it's different for every question. For example, one answer would be an Address object, another would be a first & last name object.
My question is: how can I deserialize the JSON into the correct objects/properties automatically? I can create different POCO objects, such as address & ProfileName, but how would I map them automatically to the correct object/property?
EDIT:
Loop through all Answers
foreach (var a in item.answers)
{
// pass the ANSWER OBJECT (dynamic data type) to function
createNewApplication(System.Convert.ToInt16(a.Key), a.Value.answer,ref app);
}
private void createNewApplication(int key, dynamic value,ref HcsApplicant app)
{
if (key == 4) // data is plain string
app.yourPhone = value;
if (key == 8)
app.yourEmail = value;
if (key==37) // data is a object
app.address = value.ToObject<address>();
}
Is this approach OK? Is there a cleaner way of doing it?
Personally, I don't like options that involve custom parsing and looking directly at the questions.
You can make use of partial deserialization via JToken class.
Just declare your answers dictionary as such:
public Dictionary<int, JToken> Answers{ get; set; }
And then whenever you need the address you can simply do Answers[37].ToObject<Address>(). How you call this method depends upon the rest of your code; you can embed it in properties, in a big switch, or in multiple methods, one for each class. One option I like is to have a static From method in each deserializable class:
public class Address
{
public string Name { get; set; }
// all the other properties
// ....
public static Address From(Dictionary<int, JToken> answers)
{
return answers?.TryGetValue(37, out var address) ?? false
? address?.ToObject<Address>()
: null;
}
}
// so you can just write:
var address = Address.From(answers);
As a side note, remember that the default deserialization settings for Json.Net are case insensitive, so you can deserialize the name property from JSON to a more idiomatic Name property on your POCOs.
Make a constructor for each answer type that constructs by parsing a JSON object string. Make all the answers implement an interface, e.g. IAnswer. Map all constructors (as functions) to the corresponding question IDs in a dictionary. Lastly, loop through the questions, call each constructor, and maybe put them in a new dictionary.
Example code:
public interface IAnswer { }
public class ExampleAnswer : IAnswer
{
public ExampleAnswer(String JSONObject)
{
// Parse JSON here
}
}
delegate IAnswer AnswerConstructor(String JSONObject);
Dictionary<int, AnswerConstructor> Constructors = new Dictionary<int, AnswerConstructor>()
{
{1234, ((AnswerConstructor)(json => new ExampleAnswer(json)))}
// Add all answer types here
};
Dictionary<int, IAnswer> ParseAnswers(Dictionary<int, String> JSONObjects)
{
var result = new Dictionary<int, IAnswer>();
foreach (var pair in JSONObjects)
result.Add(pair.Key, Constructors[pair.Key](pair.Value));
return result;
}
Edit: Look at Matt's answer for some good options for how to parse JSON.
Edit2, In response to your edit: That looks like a good way of doing it! I think it's better than my answer, since you can keep all type information, unlike my method.
The only thing I see that you might want to change is using else if or switch instead of multiple ifs. This could increase performance if you have many answers.
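For example, the createNewApplication method from the edit could be rewritten as a switch over the question ID (a sketch reusing the names from the question):
private void createNewApplication(int key, dynamic value, ref HcsApplicant app)
{
    switch (key)
    {
        case 4:                       // data is a plain string
            app.yourPhone = value;
            break;
        case 8:
            app.yourEmail = value;
            break;
        case 37:                      // data is an object
            app.address = value.ToObject<address>();
            break;
        default:                      // unknown question id: ignore
            break;
    }
}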
You have a few options:
1. Deserialize into a dynamic object using the System.Web package as per this answer, or the JSON.Net package as per this answer, then use conditional checks/the null propagation operator to access a property.
2. Automatically deserialize down to the level where there are differences, then have code that manually deserializes the differing properties into the correct POCO types on your parent deserialized object.
3. Leverage one of the serialization callbacks provided by JSON.Net (OnDeserializing or OnDeserialized) to handle populating the different properties into the correct types as part of the deserialization pipeline (see the sketch below).
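For the third option, a minimal sketch of a Json.NET serialization callback might look like this (AnswerEntry and the fullname POCO are illustrative names, not from the original post; address is the POCO the question already mentions):
using System.Runtime.Serialization;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class AnswerEntry
{
    public string name { get; set; }

    // Capture the varying part as a raw JToken during deserialization...
    [JsonProperty("answer")]
    public JToken RawAnswer { get; set; }

    // ...and expose strongly typed views that are populated afterwards.
    [JsonIgnore] public address Address { get; set; }
    [JsonIgnore] public fullname FullName { get; set; }

    [OnDeserialized]
    internal void OnDeserializedMethod(StreamingContext context)
    {
        if (RawAnswer == null || RawAnswer.Type != JTokenType.Object) return;
        if (RawAnswer["addr_line1"] != null)
            Address = RawAnswer.ToObject<address>();
        else if (RawAnswer["first"] != null)
            FullName = RawAnswer.ToObject<fullname>();
    }
}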
With approaches 2 and 3 you could write a nicer helper method on your POCO that inspects the object's properties and returns a result indicating which type was set (I would recommend returning an enum), e.g.:
public PropertyTypeEnum GetPropertyType(MyPocoClass myPocoClass)
{
if (myPocoClass.PropertyOne != null)
{
return PropertyTypeEnum.TypeOne;
}
else if (...)
{
return PropertyTypeEnum.TypeN;
}
else
{
    // probably throw a NotImplementedException here depending on your requirements
    throw new NotImplementedException();
}
}
Then in your code to use the object you can use the returned Enum to switch on the logical paths of your code.
We are using ASP.NET Web API 2 and want to expose ability to partially edit some object in the following fashion:
HTTP PATCH /customers/1
{
"firstName": "John",
"lastName": null
}
... to set firstName to "John" and lastName to null.
HTTP PATCH /customers/1
{
"firstName": "John"
}
... in order just to update firstName to "John" and not touch lastName at all. Suppose we have a lot of properties that we want to update with these semantics.
This is quite convenient behavior, exercised by OData for instance.
The problem is that the default JSON serializer will just come up with null in both cases, so it's impossible to distinguish them.
I'm looking for some way to annotate the model with some kind of wrappers (with a value and a set/unset flag inside) that would let us see this difference. Are there any existing solutions for this?
I know the answers already given cover all aspects, but I just want to share a concise summary of what we ended up doing, which seems to work pretty well for us.
Created a generic data contract
[DataContract]
public class RQFieldPatch<T>
{
[DataMember(Name = "value")]
public T Value { get; set; }
}
Created ad-hoc data contracts for patch requests
A sample is below.
[DataContract]
public class PatchSomethingRequest
{
[DataMember(Name = "prop1")]
public RQFieldPatch<EnumTypeHere> Prop1 { get; set; }
[DataMember(Name = "prop2")]
public RQFieldPatch<ComplexTypeContractHere> Prop2 { get; set; }
[DataMember(Name = "prop3")]
public RQFieldPatch<string> Prop3 { get; set; }
[DataMember(Name = "prop4")]
public RQFieldPatch<int> Prop4 { get; set; }
[DataMember(Name = "prop5")]
public RQFieldPatch<int?> Prop5 { get; set; }
}
Business Logic
Simple.
if (request.Prop1 != null)
{
// update code for Prop1, the value is stored in request.Prop1.Value
}
Json format
Simple. Not as extensive as the "JSON Patch" standard, but it covers all our needs.
{
"prop1": null, // will be skipped
// "prop2": null // skipped props also skipped as they will get default (null) value
"prop3": { "value": "test" } // value update requested
}
Properties
Simple contracts, simple logic
No serialization customization
Support for null values assignment
Covers any types: value, reference, complex custom types, whatever
At first I misunderstood the problem. As I was working with XML, I thought it was quite easy: just add an attribute to the property and leave the property empty. But as I found out, JSON doesn't work like that. Since I was looking for a solution that works for both XML and JSON, you'll find XML references in this answer. Also, I wrote this with a C# client in mind.
The first step is to create two classes for serialization.
public class ChangeType
{
[JsonProperty("#text")]
[XmlText]
public string Text { get; set; }
}
public class GenericChangeType<T> : ChangeType
{
}
I've chosen a generic and a non-generic class because it is hard to cast to a generic type when the type parameter is not important. Also, for the XML implementation it is necessary that XmlText is a string.
XmlText is the actual value of the property. The advantage is that you can add attributes to this object, and the fact that this is an object, not just a string. In XML it looks like: <Firstname>John</Firstname>
For JSON this doesn't work, because JSON doesn't have attributes; for JSON this is just a class with properties. To implement the idea of the XML value (I will get to that later), I've named the property #text. This is just a convention.
As XmlText is a string (and we want to serialize to a string), it is fine for storing the value regardless of the type. But for serialization, I want to know the actual type.
The drawback is that the view model needs to reference these types; the advantage is that the properties are strongly typed for serialization:
public class CustomerViewModel
{
public GenericChangeType<int> Id { get; set; }
public ChangeType Firstname { get; set; }
public ChangeType Lastname { get; set; }
public ChangeType Reference { get; set; }
}
Suppose I set the values:
var customerViewModel = new CustomerViewModel
{
// Where int needs to be saved as string.
Id = new GenericChangeType<int> { Text = "12" },
Firstname = new ChangeType { Text = "John" },
Lastname = new ChangeType { },
Reference = null // May also be omitted.
}
In xml this will look like:
<CustomerViewModel>
<Id>12</Id>
<Firstname>John</Firstname>
<Lastname />
</CustomerViewModel>
This is enough for the server to detect the changes. But with JSON it will generate the following:
{
"id": { "#text": "12" },
"firstname": { "#text": "John" },
"lastname": { "#text": null }
}
It can work, because in my implementation the receiving view model has the same definition. But since you are talking about serialization only, and in case you use another implementation, you would want:
{
"id": 12,
"firstname": "John",
"lastname": null
}
That is where we need to add a custom JSON converter to produce this result. The relevant code is in WriteJson, assuming you would add this converter to the serializer settings only. But for the sake of completeness I've added the ReadJson code as well.
public class ChangeTypeConverter : JsonConverter
{
public override bool CanConvert(Type objectType)
{
// This is important, we can use this converter for ChangeType only
return typeof(ChangeType).IsAssignableFrom(objectType);
}
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
var value = JToken.Load(reader);
// Types match, it can be deserialized without problems.
if (value.Type == JTokenType.Object)
return JsonConvert.DeserializeObject(value.ToString(), objectType);
// Convert to ChangeType and set the value, if not null:
var t = (ChangeType)Activator.CreateInstance(objectType);
if (value.Type != JTokenType.Null)
t.Text = value.ToString();
return t;
}
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
var d = value.GetType();
if (typeof(ChangeType).IsAssignableFrom(d))
{
var changeObject = (ChangeType)value;
// e.g. GenericChangeType<int>
if (value.GetType().IsGenericType)
{
try
{
// type - int
var type = value.GetType().GetGenericArguments()[0];
var c = Convert.ChangeType(changeObject.Text, type);
// write the int value
writer.WriteValue(c);
}
catch
{
// Ignore the exception, just write null.
writer.WriteNull();
}
}
else
{
// ChangeType object. Write the inner string (like xmlText value)
writer.WriteValue(changeObject.Text);
}
// Done writing.
return;
}
// Another object that is derived from ChangeType.
// Do not add the current converter here because this will result in a loop.
var s = new JsonSerializer
{
NullValueHandling = serializer.NullValueHandling,
DefaultValueHandling = serializer.DefaultValueHandling,
ContractResolver = serializer.ContractResolver
};
JToken.FromObject(value, s).WriteTo(writer);
}
}
At first I tried to add the converter to the class: [JsonConverter(typeof(ChangeTypeConverter))]. But the problem is that the converter will be used at all times, which creates a reference loop (as also mentioned in the comment in the code above). Also, you may want to use this converter for serialization only. That is why I've added it to the serializer only:
var serializerSettings = new JsonSerializerSettings
{
NullValueHandling = NullValueHandling.Ignore,
DefaultValueHandling = DefaultValueHandling.IgnoreAndPopulate,
Converters = new List<JsonConverter> { new ChangeTypeConverter() },
ContractResolver = new Newtonsoft.Json.Serialization.CamelCasePropertyNamesContractResolver()
};
var s = JsonConvert.SerializeObject(customerViewModel, serializerSettings);
This will generate the json I was looking for and should be enough to let the server detect the changes.
-- update --
As this answer focuses on serialization, the most important thing is that lastname is part of the serialized string. It then depends on the receiving party how to deserialize the string into an object again.
Serialization and deserialization use different settings. In order to deserialize again you can use:
var deserializerSettings = new JsonSerializerSettings
{
//NullValueHandling = NullValueHandling.Ignore,
DefaultValueHandling = DefaultValueHandling.IgnoreAndPopulate,
Converters = new List<JsonConverter> { new Converters.NoChangeTypeConverter() },
ContractResolver = new Newtonsoft.Json.Serialization.CamelCasePropertyNamesContractResolver()
};
var obj = JsonConvert.DeserializeObject<CustomerViewModel>(s, deserializerSettings);
If you use the same classes for deserialization then Request.Lastname should be of ChangeType, with Text = null.
I'm not sure why removing the NullValueHandling from the deserialization settings causes problems in your case. But you can overcome this by writing an empty object as value instead of null. In the converter the current ReadJson can already handle this. But in WriteJson there has to be a modification. Instead of writer.WriteValue(changeObject.Text); you need something like:
if (changeObject.Text == null)
JToken.FromObject(new ChangeType(), s).WriteTo(writer);
else
writer.WriteValue(changeObject.Text);
This would result in:
{
"id": 12,
"firstname": "John",
"lastname": {}
}
Here's my quick and inexpensive solution...
public static ObjectType Patch<ObjectType>(ObjectType source, JObject document)
where ObjectType : class
{
JsonSerializerSettings settings = new JsonSerializerSettings
{
ContractResolver = new CamelCasePropertyNamesContractResolver()
};
try
{
String currentEntry = JsonConvert.SerializeObject(source, settings);
JObject currentObj = JObject.Parse(currentEntry);
foreach (KeyValuePair<String, JToken> property in document)
{
currentObj[property.Key] = property.Value;
}
String updatedObj = currentObj.ToString();
return JsonConvert.DeserializeObject<ObjectType>(updatedObj);
}
catch (Exception)
{
    throw; // rethrow without losing the stack trace
}
}
When fetching the request body from your PATCH based method, make sure to take the argument as a type such as JObject. JObject during iteration returns a KeyValuePair struct which inherently simplifies the modification process. This allows you to get your request body content without receiving a deserialized result of your desired type.
This is beneficial due to the fact that you don't need any additional verification for nullified properties. If you want your values to be nullified that also works because the Patch<ObjectType>() method only loops through properties given in the partial JSON document.
With the Patch<ObjectType>() method, you only need to pass your source or target instance, and the partial JSON document that will update your object. This method will apply camelCase based contract resolver to prevent incompatible and inaccurate property names from being made. This method will then serialize your passed instance of a certain type and turned into a JObject.
The method then replaces all properties from the new JSON document to the current and serialized document without any unnecessary if statements.
The method stringifies the current document which now is modified, and deserializes the modified JSON document to your desired generic type.
If an exception occurs, the method will simply rethrow it. Yes, it is rather unspecific, but you are the programmer; you need to know what to expect...
This can all be done on a single and simple syntax with the following:
Entity entity = AtomicModifier.Patch<Entity>(entity, partialDocument);
This is what the operation would normally look like:
// Partial JSON document (originates from controller).
JObject newData = JObject.FromObject(new { role = 9001 });
// Current entity from EF persistence medium.
User user = await context.Users.FindAsync(id);
// Output:
//
// Username : engineer-186f
// Role : 1
//
Debug.WriteLine($"Username : {0}", user.Username);
Debug.WriteLine($"Role : {0}", user.Role);
// Partially updated entity.
user = AtomicModifier.Patch<User>(user, newData);
// Output:
//
// Username : engineer-186f
// Role : 9001
//
Debug.WriteLine($"Username : {0}", user.Username);
Debug.WriteLine($"Role : {0}", user.Role);
// Setting the new values to the context.
context.Entry(user).State = EntityState.Modified;
This method will work well if you can correctly map your two documents with the camelCase contract resolver.
Enjoy...
Update
I updated the Patch<T>() method with the following code (renamed PatchObject<T>())...
public static T PatchObject<T>(T source, JObject document) where T : class
{
Type type = typeof(T);
IDictionary<String, Object> dict =
type
.GetProperties()
.ToDictionary(e => e.Name, e => e.GetValue(source));
string json = document.ToString();
var patchedObject = JsonConvert.DeserializeObject<T>(json);
foreach (KeyValuePair<String, Object> pair in dict)
{
foreach (KeyValuePair<String, JToken> node in document)
{
string propertyName = char.ToUpper(node.Key[0]) +
node.Key.Substring(1);
if (propertyName == pair.Key)
{
PropertyInfo property = type.GetProperty(propertyName);
property.SetValue(source, property.GetValue(patchedObject));
break;
}
}
}
return source;
}
I know I'm a little bit late with this answer, but I think I have a solution that doesn't require changing serialization and also doesn't involve reflection (this article refers you to a JsonPatch library that someone wrote that uses reflection).
Basically create a generic class representing a property that could be patched
public class PatchProperty<T> where T : class
{
public bool Include { get; set; }
public T Value { get; set; }
}
And then create models representing the objects that you want to patch where each of the properties is a PatchProperty
public class CustomerPatchModel
{
public PatchProperty<string> FirstName { get; set; }
public PatchProperty<string> LastName { get; set; }
public PatchProperty<int> IntProperty { get; set; }
}
Then your WebApi method would look like
public void PatchCustomer(CustomerPatchModel customerPatchModel)
{
if (customerPatchModel.FirstName?.Include == true)
{
// update first name
string firstName = customerPatchModel.FirstName.Value;
}
if (customerPatchModel.LastName?.Include == true)
{
// update last name
string lastName = customerPatchModel.LastName.Value;
}
if (customerPatchModel.IntProperty?.Include == true)
{
// update int property
int intProperty = customerPatchModel.IntProperty.Value;
}
}
And you could send a request with some Json that looks like
{
"LastName": { "Include": true, "Value": null },
"OtherProperty": { "Include": true, "Value": 7 }
}
Then we would know to ignore FirstName but still set the other properties to null and 7 respectively.
Note that I haven't tested this and I'm not 100% sure it would work. It would basically rely on .NET's ability to serialize the generic PatchProperty. But since the properties on the model specify the type of the generic T, I would think it would be able to. Also since we have "where T : class" on the PatchProperty declaration, the Value should be nullable. I'd be interested to know if this actually works though. Worst case you could implement a StringPatchProperty, IntPatchProperty, etc. for all your property types.
I have such a class:
public class item
{
public string Name { get; set; }
public string City { get; set; }
public string Pw { get; set; }
}
from which I create several objects that I store in the DB. Then I want to update one of them with data coming from the client in the form of JSON like this:
{
"Name":"John",
"City":"NYC"
}
the idea would be to use:
item myitem = JsonConvert.DeserializeObject<item>(jsoncomingfromclient);
but by doing so, Pw is overwritten with null (while obviously I want to keep the original value).
NullValueHandling looks like a good candidate, but it only works if the value is null; in my case the value is completely missing from the JSON.
Any idea how to deserialize a json keeping the old value in the destination object if the value is missing in the json?
Use JsonConvert.PopulateObject. It's designed for this purpose:
var item = new item { Name = "my name", City = "my city", Pw = "my pw" };
var json = #"
{
""Name"":""John"",
""City"":""NYC""
}";
JsonConvert.PopulateObject(json, item);
Debug.Assert(item.Pw == "my pw"); // no assert
Debug.Assert(item.Name == "John"); // no assert
Debug.Assert(item.City == "NYC"); // no assert
This part of the code, JsonConvert.DeserializeObject<item>(jsoncomingfromclient);, will create a new instance of type item based on the parameter jsoncomingfromclient and return it.
This part, item myitem = ..., declares a variable myitem of type item and assigns it that new instance. So there is no way to merge anything like this.
You just have to write some merge method manually and define what and how is merged between the two objects.
Something like this:
item dbitem = ...
item myitem = JsonConvert.DeserializeObject<item>(jsoncomingfromclient);
item mergedItem = myitem.merge(dbitem);
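A minimal sketch of such a merge method (merge above is just the answer's placeholder name; this version keeps the existing value wherever the incoming one is null):
public static class ItemMergeExtensions
{
    public static item Merge(this item incoming, item existing)
    {
        return new item
        {
            Name = incoming.Name ?? existing.Name,
            City = incoming.City ?? existing.City,
            Pw   = incoming.Pw   ?? existing.Pw   // missing in the JSON, so keep the DB value
        };
    }
}
Note that, like NullValueHandling, this cannot distinguish an explicit null from a missing field, which is why PopulateObject in the accepted answer is usually the better fit.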
I'm trying to grasp how Azure table storage works to create facebook-style feeds and I'm stuck on how to retrieve the entries.
(My question is almost the same as https://stackoverflow.com/questions/6843689/retrieve-multiple-type-of-entities-from-azure-table-storage but the link in the answer is broken.)
This is my intended approach:
Create a personal feed for all users within my application which can contain different types of entries (notification, status update etc). My idea is to store them in an Azure Table grouped by a partition key for each user.
Retrieve all entries within the same partition key and pass it to different views depending on entry type.
How do I query the table storage for all types of the same base type while keeping their unique properties?
The CloudTableQuery<TElement> requires a typed entity; if I specify EntryBase as the generic argument I don't get the entry-specific properties (NotificationSpecificProperty, StatusUpdateSpecificProperty), and vice versa.
My entities:
public class EntryBase : TableServiceEntity
{
public EntryBase()
{
}
public EntryBase(string partitionKey, string rowKey)
{
this.PartitionKey = partitionKey;
this.RowKey = rowKey;
}
}
public class NotificationEntry : EntryBase
{
public string NotificationSpecificProperty { get; set; }
}
public class StatusUpdateEntry : EntryBase
{
public string StatusUpdateSpecificProperty { get; set; }
}
My query for a feed:
List<EntryBase> entries = // how do I fetch all entries?
foreach (var item in entries)
{
if(item.GetType() == typeof(NotificationEntry)){
// handle notification
}else if(item.GetType() == typeof(StatusUpdateEntry)){
// handle status update
}
}
Finally there's an official way! :)
Look at the NoSQL sample which does exactly this in this link from the Azure Storage Team Blog:
Windows Azure Storage Client Library 2.0 Tables Deep Dive
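The core technique from that post is an EntityResolver that picks the concrete type per row. Here is a minimal sketch, assuming the entry classes from the question are ported to derive from TableEntity in the 2.0 client library, and that feedTable and userId are set up elsewhere:
EntityResolver<EntryBase> feedResolver = (pk, rk, timestamp, properties, etag) =>
{
    // Decide the concrete type from the properties actually stored on the row.
    EntryBase entry = properties.ContainsKey("NotificationSpecificProperty")
        ? (EntryBase)new NotificationEntry()
        : new StatusUpdateEntry();
    entry.PartitionKey = pk;
    entry.RowKey = rk;
    entry.Timestamp = timestamp;
    entry.ETag = etag;
    entry.ReadEntity(properties, null);
    return entry;
};

var query = new TableQuery().Where(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, userId));
List<EntryBase> entries = feedTable.ExecuteQuery(query, feedResolver).ToList();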
There are a few ways to go about this and how you do it depends a bit on your personal preference as well as potentially performance goals.
Create an amalgamated class that represents all queried types. If I had a StatusUpdateEntry and a NotificationEntry, then I would simply merge each property into a single class. The serializer will automatically fill in the correct properties and leave the others null (or default). If you also put a 'type' property on the entity (calculated or set in storage), you could easily switch on that type. Since I always recommend mapping from the table entity to your own type in the app, this works fine as well (the class is only used as a DTO).
Example:
[DataServiceKey("PartitionKey", "RowKey")]
public class NoticeStatusUpdateEntry
{
public string PartitionKey { get; set; }
public string RowKey { get; set; }
public string NoticeProperty { get; set; }
public string StatusUpdateProperty { get; set; }
public string Type
{
get
{
return String.IsNullOrEmpty(this.StatusUpdateProperty) ? "Notice" : "StatusUpdate";
}
}
}
Override the serialization process. You can do this yourself by hooking the ReadingEntity event. It gives you the raw XML and you can choose to deserialize however you want. Jai Haridas and Pablo Castro gave some example code for reading an entity when you don't know the type (included below), and you can adapt that to read specific types that you do know about.
The downside to both approaches is that you end up pulling more data than you need in some cases. You need to weigh this on how much you really want to query one type versus another. Keep in mind you can use projection now in Table storage, so that also reduces the wire format size and can really speed things up when you have larger entities or many to return. If you ever had the need to query only a single type, I would probably use part of the RowKey or PartitionKey to specify the type, which would then allow me to query only a single type at a time (you could use a property, but that is not as efficient for query purposes as PK or RK).
Edit: As noted by Lucifure, another great option is to design around it. Use multiple tables, query in parallel, etc. You need to trade that off with complexity around timeouts and error handling of course, but it is a viable and often good option as well depending on your needs.
Reading a Generic Entity:
[DataServiceKey("PartitionKey", "RowKey")]
public class GenericEntity
{
public string PartitionKey { get; set; }
public string RowKey { get; set; }
Dictionary<string, object> properties = new Dictionary<string, object>();
internal object this[string key]
{
get
{
return this.properties[key];
}
set
{
this.properties[key] = value;
}
}
public override string ToString()
{
// TODO: append each property
return "";
}
}
void TestGenericTable()
{
var ctx = CustomerDataContext.GetDataServiceContext();
ctx.IgnoreMissingProperties = true;
ctx.ReadingEntity += new EventHandler<ReadingWritingEntityEventArgs>(OnReadingEntity);
var customers = from o in ctx.CreateQuery<GenericEntity>(CustomerDataContext.CustomersTableName) select o;
Console.WriteLine("Rows from '{0}'", CustomerDataContext.CustomersTableName);
foreach (GenericEntity entity in customers)
{
Console.WriteLine(entity.ToString());
}
}
// Credit goes to Pablo from ADO.NET Data Service team
public void OnReadingEntity(object sender, ReadingWritingEntityEventArgs args)
{
// TODO: Make these statics
XNamespace AtomNamespace = "http://www.w3.org/2005/Atom";
XNamespace AstoriaDataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices";
XNamespace AstoriaMetadataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";
GenericEntity entity = args.Entity as GenericEntity;
if (entity == null)
{
return;
}
// read each property, type and value in the payload
var properties = args.Entity.GetType().GetProperties();
var q = from p in args.Data.Element(AtomNamespace + "content")
.Element(AstoriaMetadataNamespace + "properties")
.Elements()
where properties.All(pp => pp.Name != p.Name.LocalName)
select new
{
Name = p.Name.LocalName,
IsNull = string.Equals("true", p.Attribute(AstoriaMetadataNamespace + "null") == null ? null : p.Attribute(AstoriaMetadataNamespace + "null").Value, StringComparison.OrdinalIgnoreCase),
TypeName = p.Attribute(AstoriaMetadataNamespace + "type") == null ? null : p.Attribute(AstoriaMetadataNamespace + "type").Value,
p.Value
};
foreach (var dp in q)
{
entity[dp.Name] = GetTypedEdmValue(dp.TypeName, dp.Value, dp.IsNull);
}
}
private static object GetTypedEdmValue(string type, string value, bool isnull)
{
if (isnull) return null;
if (string.IsNullOrEmpty(type)) return value;
switch (type)
{
case "Edm.String": return value;
case "Edm.Byte": return Convert.ChangeType(value, typeof(byte));
case "Edm.SByte": return Convert.ChangeType(value, typeof(sbyte));
case "Edm.Int16": return Convert.ChangeType(value, typeof(short));
case "Edm.Int32": return Convert.ChangeType(value, typeof(int));
case "Edm.Int64": return Convert.ChangeType(value, typeof(long));
case "Edm.Double": return Convert.ChangeType(value, typeof(double));
case "Edm.Single": return Convert.ChangeType(value, typeof(float));
case "Edm.Boolean": return Convert.ChangeType(value, typeof(bool));
case "Edm.Decimal": return Convert.ChangeType(value, typeof(decimal));
case "Edm.DateTime": return XmlConvert.ToDateTime(value, XmlDateTimeSerializationMode.RoundtripKind);
case "Edm.Binary": return Convert.FromBase64String(value);
case "Edm.Guid": return new Guid(value);
default: throw new NotSupportedException("Not supported type " + type);
}
}
Another option, of course, is to have only a single entity type per table, query the tables in parallel and merge the result sorted by timestamp.
In the long run this may prove to be the more prudent choice with reference to scalability and maintainability.
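For instance, the parallel-query-and-merge approach might look roughly like this, where QueryFeedAsync<T> is a hypothetical helper that queries one single-type table for a user's partition (inside an async method):
var notificationsTask = QueryFeedAsync<NotificationEntry>("NotificationFeed", userId);
var statusUpdatesTask = QueryFeedAsync<StatusUpdateEntry>("StatusUpdateFeed", userId);
await Task.WhenAll(notificationsTask, statusUpdatesTask);

// Merge the two single-type result sets into one feed, newest first.
List<EntryBase> feed = notificationsTask.Result.Cast<EntryBase>()
    .Concat(statusUpdatesTask.Result)
    .OrderByDescending(e => e.Timestamp)
    .ToList();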
Alternatively you would need to use some flavor of generic entities as outlined by ‘dunnry’, where the non-common data is not explicitly typed and instead persisted via a dictionary.
I have written an alternate Azure table storage client, Lucifure Stash, which supports additional abstractions over azure table storage including persisting to/from a dictionary, and may work in your situation if that is the direction you want to pursue.
Lucifure Stash supports large data columns > 64K, arrays & lists, enumerations, composite keys, out of the box serialization, user defined morphing, public and private properties and fields and more. It is available free for personal use at http://www.lucifure.com or via NuGet.com.
Edit: Now open sourced at CodePlex
Use DynamicTableEntity as the entity type in your queries. It has a dictionary of properties you can look up. It can return any entity type.
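A short sketch of that approach (assuming the WindowsAzure.Storage client; the property names come from the question, while feedTable and userId are assumed to be set up elsewhere):
var query = new TableQuery<DynamicTableEntity>().Where(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, userId));

foreach (DynamicTableEntity row in feedTable.ExecuteQuery(query))
{
    if (row.Properties.ContainsKey("NotificationSpecificProperty"))
    {
        // handle notification
        string text = row.Properties["NotificationSpecificProperty"].StringValue;
    }
    else if (row.Properties.ContainsKey("StatusUpdateSpecificProperty"))
    {
        // handle status update
        string text = row.Properties["StatusUpdateSpecificProperty"].StringValue;
    }
}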