I was wondering if anyone could point me in the right direction for easily solving the following problem:
Suppose I have a Player class in my .NET code, which looks like this:
public class Player
{
public int Id { get; set; }
public string Name { get; set; }
public long Score { get; set; }
}
I need to serialize this class into a JSON string (using JSON.NET) and use it when POST-ing to a web service. However, some of the service's endpoints explicitly prohibit certain members from occurring in the JSON string. For instance, a "post score" endpoint would allow all 3 members to be included, while a "register player" endpoint would only allow Id and Name to be present (otherwise, a bad request is returned to the client). Now I know that I could make 2 different classes (e.g. Player and CompetitivePlayer), each containing the required (sub)set of members; however, for practical purposes let's suppose I can't or don't want to do this (my actual data objects are more complex than the Player class given here simply as an example).
So what I actually want is to tell the JSON serializer at runtime that only certain members of an object must be serialized in situation X, while in situation Y a whole different subset is to be serialized. At first I thought that implementing my own ContractResolver would help, but as it turns out this is only called once per object type, not once per object being serialized. Now the only solution I can think of is to subclass JsonSerializer and have it use a JsonWriter that ignores the properties whose names are included in a list of strings given as an argument - although I'm not quite sure this plan can work. Is there a simpler solution for what I'm trying to achieve?
Ok, after looking over the JSON.NET source code I found the exact spot that prevented me from custom serializing those properties: the CamelCasePropertyNamesContractResolver class.
Before writing the original question, I tried to implement custom serialization as described here, in the IContractResolver section. However, instead of inheriting directly from DefaultContractResolver I used CamelCasePropertyNamesContractResolver (I needed camel casing here) which, as I found by looking inside its code, sets a "share cache" flag that prevented some of its methods from being called, for performance reasons (which I'm willing to sacrifice in this scenario). Thus, CreateProperties() was called only once per object type, instead of every time my object needed to be serialized. So now my contract resolver class looks like this:
class OptionalPropertiesContractResolver : DefaultContractResolver
{
//only the properties whose names are included in this list will be serialized
IEnumerable<string> _includedProperties;
public OptionalPropertiesContractResolver(IEnumerable<string> includedProperties)
{
_includedProperties = includedProperties;
}
protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
{
return (from prop in base.CreateProperties(type, memberSerialization)
where _includedProperties.Contains(prop.PropertyName)
select prop).ToList();
}
protected override string ResolvePropertyName(string propertyName)
{
// lower case the first letter of the passed in name
return ToCamelCase(propertyName);
}
static string ToCamelCase(string s)
{
// camel case implementation: lower-case the first character
if (string.IsNullOrEmpty(s) || !char.IsUpper(s[0])) return s;
return char.ToLowerInvariant(s[0]) + s.Substring(1);
}
}
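A minimal usage sketch (note that the names passed in are compared against the resolved, i.e. camel-cased, property names, since CreateProperties() sees the names after ResolvePropertyName has run):
var settings = new JsonSerializerSettings
{
    ContractResolver = new OptionalPropertiesContractResolver(new[] { "id", "name" })
};

// "register player" payload: Score is left out entirely
string json = JsonConvert.SerializeObject(player, settings);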
Just wanted to let others know of this particular situation in case they ever come across it.
I would create contract classes and use AutoMapper to map to them from the Player class, and serialize them as needed - e.g. PlayerContract, CompetitivePlayerContract, etc.
It doesn't really matter that those classes only represent contracts to your service.
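A rough sketch of what that could look like (the contract class name and the MapperConfiguration-style setup are assumptions; older AutoMapper versions used the static Mapper.CreateMap API instead):
public class RegisterPlayerContract
{
    public int Id { get; set; }
    public string Name { get; set; }
    // no Score property, so it can never leak into the "register player" payload
}

// mapping setup (typically done once at startup)
var config = new MapperConfiguration(cfg => cfg.CreateMap<Player, RegisterPlayerContract>());
var mapper = config.CreateMapper();

// when posting to the "register player" endpoint
string json = JsonConvert.SerializeObject(mapper.Map<RegisterPlayerContract>(player));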
On our API we need to take in json, deserialize it to an interface, set a field, and ship it off. To achieve this, on both ends I'm setting the jsonConvert to use TypeNameHandling.All. The endpoint in question is supposed to be fairly locked down, but there's always a chance of someone gaining access and setting $type to a system class with a dangerous constructor or garbage collection method.
My question is: would verifying the namespace of the type before attempting to deserialize it be sufficiently safe? Or would there still be a risk of having something like a sub-object with a dangerous class type in the JSON? If there is still a risk or an exploit I've missed, what other steps can I take to mitigate the danger?
Our company name is at the start of every namespace we use, so in the code below we just check that the type set in the JSON starts with our company name. The {} at the start just introduces a scope, so the JObject doesn't need to stay around after the check.
{ //check the type is valid
var securityType = JsonConvert.DeserializeObject<JObject>(request.requestJson);
JToken type;
if (securityType.TryGetValue("$type", out type))
{
if (!type.ToString().ToLower().StartsWith("foo")) { //'foo' is our company name, all our namespaces start with foo
await logError($"Possible security violation, client tried to instantiate {type}", clientId: ClientId);
throw new Exception($"Request type {type} not supported, please use an IFoo");
}
}
else
{
throw new Exception("set a type...");
}
}
IFoo requestObject = JsonConvert.DeserializeObject<IFoo>(request.requestJson, new JsonSerializerSettings()
{
TypeNameHandling = TypeNameHandling.All
});
The risk with TypeNameHandling is that an attacker may trick the receiver into constructing an attack gadget - an instance of a type that when constructed, populated or disposed effects an attack on the receiving system. For an overview see
TypeNameHandling caution in Newtonsoft Json
External json vulnerable because of Json.Net TypeNameHandling auto?
If you are going to protect against such attacks by requiring all deserialized types to be in your own company's .NET namespace, be aware that, when serializing with TypeNameHandling.All, "$type" information will appear throughout the JSON token hierarchy, for all arrays and objects (including .NET types such as List<T>). As such, you must apply your "$type" check everywhere type information might occur. The easiest way to do this is with a custom serialization binder such as the following:
public class MySerializationBinder : DefaultSerializationBinder
{
const string MyNamespace = "foo"; //'foo' is our company name, all our namespaces start with foo
public override Type BindToType(string assemblyName, string typeName)
{
if (!typeName.StartsWith(MyNamespace, StringComparison.OrdinalIgnoreCase))
throw new JsonSerializationException($"Request type {typeName} not supported, please use an IFoo");
var type = base.BindToType(assemblyName, typeName);
return type;
}
}
Which can be used as follows:
var settings = new JsonSerializerSettings
{
SerializationBinder = new MySerializationBinder(),
TypeNameHandling = TypeNameHandling.All,
};
This has the added advantage of being more performant than your solution since pre-loading into a JObject is no longer required.
However, having done so, you may encounter the following issues:
Even if the root object is always from your company's namespace, the "$type" properties for nested values may not necessarily be in your company's namespace. Specifically, type information for harmless generic system collections such as List<T> and Dictionary<TKey, TValue> as well as arrays will be included. You may need to enhance BindToType() to whitelist such types.
Serializing with TypeNameHandling.Objects or TypeNameHandling.Auto can reduce the need to whitelist such harmless system types, as type information for them is less likely to be included during serialization than with TypeNameHandling.All.
To further simplify the type checking as well as to reduce your attack surface overall, you might consider only allowing type information on the root object. To do that, see json.net - how to add property $type ONLY on root object. SuppressItemTypeNameContractResolver from the accepted answer can be used on the receiving side as well as the sending side, to ignore type information on non-root objects.
Alternatively, you could serialize and deserialize with TypeNameHandling.None globally and wrap your root object in a container marked with [JsonProperty(TypeNameHandling = TypeNameHandling.Auto)] like so:
public class Root<TBase>
{
[JsonProperty(TypeNameHandling = TypeNameHandling.Auto)]
public TBase Data { get; set; }
}
Since your root objects all seem to implement some interface IFoo you would serialize and deserialize a Root<IFoo> which would restrict the space of possible attack gadgets to classes implementing IFoo -- a much smaller attack surface.
Demo fiddle here.
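For illustration, the wrapper approach might be used like this (a sketch; myFoo stands in for your request object):
var settings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.None };

// "$type" is emitted only for Data, because of the attribute on Root<TBase>
string json = JsonConvert.SerializeObject(new Root<IFoo> { Data = myFoo }, settings);
IFoo roundTripped = JsonConvert.DeserializeObject<Root<IFoo>>(json, settings).Data;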
When deserializing generics, both the outer generic and the inner generic parameter types may need to be sanitized recursively. For instance, if your namespace contains a Generic<T> then checking that the typeName begins with your company's namespace will not protect against an attack via a Generic<SomeAttackGadget>.
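One way to do that recursive sanitizing is to resolve the type first and then walk its array element type and generic arguments. The sketch below is deliberately strict, and the whitelist of harmless generic collections is only an assumption you would adapt to your own payloads:
public class RecursiveValidationBinder : DefaultSerializationBinder
{
    const string MyNamespace = "foo"; // assumption: all company namespaces start with "foo"

    // Generic collection definitions treated as harmless containers.
    static readonly HashSet<Type> AllowedGenericDefinitions = new HashSet<Type>
    {
        typeof(List<>), typeof(Dictionary<,>), typeof(HashSet<>)
    };

    public override Type BindToType(string assemblyName, string typeName)
    {
        Type type = base.BindToType(assemblyName, typeName);
        Validate(type);
        return type;
    }

    static void Validate(Type type)
    {
        if (type.IsArray)
        {
            Validate(type.GetElementType()); // sanitize the element type of arrays
        }
        else if (type.IsGenericType)
        {
            if (!AllowedGenericDefinitions.Contains(type.GetGenericTypeDefinition()) && !IsCompanyType(type))
                throw new JsonSerializationException($"Type {type} not supported.");
            foreach (Type arg in type.GetGenericArguments())
                Validate(arg); // sanitize T in List<T>, Generic<T>, etc.
        }
        else if (!IsCompanyType(type) && !type.IsPrimitive && type != typeof(string))
        {
            throw new JsonSerializationException($"Type {type} not supported.");
        }
    }

    static bool IsCompanyType(Type type)
    {
        return type.FullName != null && type.FullName.StartsWith(MyNamespace, StringComparison.OrdinalIgnoreCase);
    }
}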
Even if you only allow types from your own namespace, it's hard to say that's enough to be sufficiently safe, because we don't know whether any of the classes in your own namespace might be repurposed as attack gadgets.
When you make a new WCF project, a sample service is generated for you. The default data contract is the following (I've just renamed the string field):
[DataContract]
public class CompositeType
{
bool boolValue = true;
string name = "";
[DataMember]
public bool BoolValue
{
get { return boolValue; }
set { boolValue = value; }
}
[DataMember]
public string Name
{
get { return name; }
set { name = value; }
}
}
What is the point of having those private fields boolValue and name? Is it good practice to write data sanitizing or other manipulation logic in the contract, thus bloating it? That seems to me the only sane reason for not writing to the fields directly. So is this just bloat, or is there some reason behind it?
In my opinion, a DataContract's singular purpose should be to transfer data between domains. Validation/sanitizing logic should be outside the DataContract's responsibilities, especially if the intent is to share/link the code file in multiple projects/platforms for reuse.
This also implies that you shouldn't have your DataContract object used elsewhere in your application. It should go through some kind of adapter or converter to read/write the content to your application-specific objects. It's in that conversion (or in your application objects) where you can do some validation. The simpler your data-transfer layer, the better.
Plausibly, you might add logging/debugging code in the setters/getters (preferably temporarily) to track data input/output as needed. So far, that's the only case I've felt OK to put anything other than simple properties in a DataContract object (and again, I only did so temporarily).
EDIT: As to why this is the default generated file, I'm not sure. My DataContract objects are always using automatic properties. I'd suggest maybe this was a throwback to .NET 2.0 before automatic properties were introduced, but WCF/DataContracts weren't introduced until 3.0 anyway.
The same reason you ever write a getter and setter for any private value: to aid encapsulation and allow you to change the inner workings of your class without having to worry about breaking outside consumers that were manipulating variables directly.
The short answer is that the public properties allow you, the designer, to restrict the values before they can be assigned to your private fields, potentially saving you from dealing with unexpected data. Although most get and set methods are the same, they are frequently the first line of defense against bad data.
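For illustration only (the other answers rightly warn against putting much logic here), the property form lets you do something like this inside the generated CompositeType:
[DataMember]
public string Name
{
    get { return name; }
    set
    {
        // first line of defense: reject obviously bad input before it reaches the private field
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("Name must not be empty.");
        name = value.Trim();
    }
}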
I am currently using the same C# DTOs to pull data out of CouchDB, via LoveSeat, which I am going to return as JSON via an ASP.NET MVC controller.
I am using the Newtonsoft library to serialise my DTOs before sending them down through the controller.
However, as CouchDB also uses Newtonsoft, it is also respecting the property-level Newtonsoft attributes such as
[JsonIgnore]
[JsonProperty("foo")]
Is there any way to tell the Newtonsoft library to explicitly ignore these attributes? LoveSeat allows me to provide my own implementation of IObjectSerializer, which gives me full control over Newtonsoft's JsonSerializerSettings. So, can I ignore the attributes by using those settings?
I ask as the only alternative I can see at this point is to dupe my DTOs. While that's not terrible, it isn't great either.
The only other way I can see is to bring my own version of the Newtonsoft.Json source into my project, with a different assembly name etc. But that way madness definitely lies, and I will just dupe the DTOs before I go down that road.
I'm not sure if this is what you're after, but from what I understand you're looking for the [JsonIgnore] attribute. It stops properties from being serialized with the rest of the object to JSON.
[JsonIgnore]
public string Whatever{ get; set; }
One suggestion that you may not like: as a best practice, I recommend having two almost identical objects - one specifically for your Data Access Layer (a Domain Object) which maps to your DB, and a separate DTO that your apps care about. This way the Domain Object will usually contain more properties than the DTO, and you can separate the concerns.
According to the Json.NET documentation:
You can add a method to your class: public bool ShouldSerialize_________(){...} and fill in the blank with the name of the property you don't want to serialize. If the method returns false, the property will be ignored.
The example from the documentation doesn't want to serialize an employee's manager if the manager is the same employee.
public class Employee
{
public string Name { get; set; }
public Employee Manager { get; set; }
public bool ShouldSerializeManager()
{
// don't serialize the Manager property if an employee is their own manager
return (Manager != this);
}
}
You could put some kind of inhibit setting on your class:
public class DTO
{
[JsonIgnore]
public bool IsWritingToDatabase { get; set; }
public string AlwaysSerialize { get; set; }
public string Optional { get; set; }
public bool ShouldSerializeOptional()
{
return IsWritingToDatabase;
}
}
But, this isn't much simpler than having two objects. So I would recommend doing as #zbugs says, and having separate definitions for API-side and DB-side.
I ended up making virtual all the properties I needed to add attributes to, and overriding just those in another class, with the relevant Newtonsoft attributes.
This allows me to have different serialisation behaviour when deserialising from CouchDB and serialising for a GET, without too much duplication. It is fine, and a bonus, that the two are coupled; any changes in the base I would want anyway.
It would still be nice to know whether what I asked in my original question is possible, though.
This newtonking.com link helped in a similar situation. It extends the DefaultContractResolver class. To make it work I had to replace
protected override IList<JsonProperty> CreateProperties(JsonObjectContract contract)
with
protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
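For anyone else landing here, the resulting resolver is roughly the following sketch (not LoveSeat-specific; plug it into the JsonSerializerSettings that IObjectSerializer lets you supply):
class IgnoreJsonAttributesResolver : DefaultContractResolver
{
    protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
    {
        IList<JsonProperty> props = base.CreateProperties(type, memberSerialization);
        foreach (var prop in props)
        {
            prop.Ignored = false;                    // counteract [JsonIgnore]
            prop.PropertyName = prop.UnderlyingName; // counteract [JsonProperty("foo")] renames
        }
        return props;
    }
}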
In my music/rhythm game, I'm using serialization in order to save user-created simfiles (think music tabs or notecharts). Nothing too earth-shattering there. But I'm using a DataContract in order to perform the serialization, because:
1) I need private and protected fields serialized as well. I don't care if they're visible, mainly due to...
2) I'd like for the user to be able to edit the serialized file in his/her favorite text editor and be able to load these into the game (these files will be used to represent musical notes in the game, think StepMania simfiles).
One of my custom datatypes I'd like to serialize is a Fraction class I've created:
using System.Runtime.Serialization;
namespace Fractions {
[DataContract(Namespace="")] // Don't need the namespaces.
public sealed class Fraction {
// NOTE THAT THESE ARE "READONLY"!
[DataMember(Name="Num", Order=1)] private readonly long numer;
[DataMember(Name="Den", Order=2)] private readonly long denom;
// ...LOTS OF STUFF...
public static Fraction FromString(string str) {
// Try and parse string and create a Fraction from it, and return it.
// This is static because I'm returning a new created Fraction.
}
public override string ToString() {
return numer.ToString() + "/" + denom.ToString();
}
}
}
Testing this, it works decently, serializing into an XML fragment of the form:
<Fraction>
<Num>(INT64 VALUE AS STRING)</Num>
<Den>(INT64 VALUE AS STRING)</Den>
</Fraction>
Now, I could just be happy with this and go on my merry coding way. But I'm picky.
My end users will probably not be super familiar with XML, and there are a LOT of more complex datatypes in my game that will include a lot of Fractions, so I'd much rather be able to represent a Fraction in the XML simfile like this (much more concisely):
<Fraction>(NUMERATOR)/(DENOMINATOR)</Fraction>
However, I'm at a loss as to how to do this without breaking automatic (de)serialization. I looked into the IXmlSerializable interface, but I was shut down by the fact that my datatype needed to be mutable in order for it to work (ReadXml() doesn't return a new Fraction object, but instead seems to flash-instantiate one, and you have to fill in the values manually, which doesn't work due to the readonly). Using the OnSerializing and OnDeserialized attributes didn't work either for the same reason. I'd REALLY prefer to keep my Fraction class immutable.
I'm guessing there's a standard procedure by which primitives are converted to/from strings when serializing to XML. Any numeric primitives, for instance, would have to be converted to/from strings upon serializing/deserializing. Is there any way for me to be able to add this sort of automatic string from/to conversion to my Fraction type? If it were possible, I'd imagine the serializing procedure would look something like this:
1) Serialize this complex datatype which contains Fraction fields.
2) Start serializing the [DataMember] fields.
3) Hey, this field is a Fraction object. Fractions are able to represent themselves fully as a string. Write that string out to the XML directly, instead of diving into the Fraction object and writing out all its fields.
...
Deserialization would work the opposite way:
1) Deserialize this data, we're expecting so-and-so data type.
2) Start deserializing fields.
3) Oh look, a Fraction object, I can just go ahead and read the content string, convert that string into a new Fraction object, and return the newly-created Fraction.
...
Is there anything I can do to accomplish this? Thanks!
EDIT: Data Contract Surrogates seem like the way to go, but for the life of me I can't seem to understand them or have them work in my game. Or rather, they add some nasty automatic namespace and ID fields to my serialized elements.
I guess that you can probably use Data Contract Surrogates.
But an even simpler way would be to have a private string member fractionString within your type that represents the string form of your type. You have to initialize it only during object construction (as your type is immutable). Then you can exclude numer and denom from serialization and mark the fractionString field as the DataMember. The downside to this approach is the additional space consumption.
public sealed class Fraction {
private readonly long numer;
private readonly long denom;
[DataMember(Name="Fraction")]
private string fractionString;
EDIT: Never mind, just re-read what you want and realized above won't work.
I had a similar problem, though in my case I was using the Type-Safe-Enumeration pattern. In either case, when you write your DataContract and one of its members is a non-simple data type, C# decides to turn that class into an element of its own and then looks to that class for its data contract.
Unfortunately, that is not what either of us wants.
My solution was two part:
1) In the complex class you want to include (Fraction in your case), provide methods to serialize and deserialize the object to and from a string. (I used the cast operators because that carries the meaning best to me):
class Complex
{
...
// NOTE: that this can be implicit since every Complex generates a valid string
public static implicit operator string(Complex value)
{
return <... code to generate a string from the Complex Type...>;
}
// NOTE: this must be explicit since it can throw an exception because not all
// strings are valid Complex types
public static explicit operator Complex(string value)
{
return <... code to validate and create a Complex object from a string ...>;
}
...
}
2) Now, when you use a Complex type object in another class, you define the DataContract with a string property and use a backing value of the actual Complex type:
[DataContract]
class User
{
...
[DataMember]
public string MyComplex
{
get { return m_myComplex; }
set { m_myComplex = (Complex)value; }
}
// NOTE that this member is _not_ part of the DataContract
Complex m_myComplex;
...
}
While I admit it is not an ideal solution, requiring the presence of additional code in the using class, it does allow arbitrary string representation of a complex class within the DataContract format without additional layers that would otherwise be necessary.
I have over the course of a few projects developed a pattern for creating immutable (readonly) objects and immutable object graphs. Immutable objects carry the benefit of being 100% thread safe and can therefore be reused across threads. In my work I very often use this pattern in Web applications for configuration settings and other objects that I load and cache in memory. Cached objects should always be immutable as you want to guarantee they are not unexpectedly changed.
Now, you can of course easily design immutable objects as in the following example:
public class SampleElement
{
private Guid id;
private string name;
public SampleElement(Guid id, string name)
{
this.id = id;
this.name = name;
}
public Guid Id
{
get { return id; }
}
public string Name
{
get { return name; }
}
}
This is fine for simple classes - but for more complex classes I do not fancy the concept of passing all values through a constructor. Having setters on the properties is more desirable and your code constructing a new object gets easier to read.
So how do you create immutable objects with setters?
Well, in my pattern objects start out as being fully mutable until you freeze them with a single method call. Once an object is frozen it will stay immutable forever - it cannot be turned into a mutable object again. If you need a mutable version of the object, you simply clone it.
Ok, now on to some code. I have in the following code snippets tried to boil the pattern down to its simplest form. The IElement is the base interface that all immutable objects must ultimately implement.
public interface IElement : ICloneable
{
bool IsReadOnly { get; }
void MakeReadOnly();
}
The Element class is the default implementation of the IElement interface:
public abstract class Element : IElement
{
private bool immutable;
public bool IsReadOnly
{
get { return immutable; }
}
public virtual void MakeReadOnly()
{
immutable = true;
}
protected virtual void FailIfImmutable()
{
if (immutable) throw new ImmutableElementException(this);
}
...
}
Let's refactor the SampleElement class above to implement the immutable object pattern:
public class SampleElement : Element
{
private Guid id;
private string name;
public SampleElement() {}
public Guid Id
{
get
{
return id;
}
set
{
FailIfImmutable();
id = value;
}
}
public string Name
{
get
{
return name;
}
set
{
FailIfImmutable();
name = value;
}
}
}
You can now change the Id property and the Name property as long as the object has not been marked as immutable by calling the MakeReadOnly() method. Once it is immutable, calling a setter will yield an ImmutableElementException.
Final note:
The full pattern is more complex than the code snippets shown here. It also contains support for collections of immutable objects and for complete graphs of immutable objects. The full pattern enables you to turn an entire object graph immutable by calling the MakeReadOnly() method on the outermost object. Once you start creating larger object models using this pattern, the risk of leaky objects increases. A leaky object is an object that fails to call the FailIfImmutable() method before making a change to itself. To test for leaks I have also developed a generic leak detector class for use in unit tests. It uses reflection to test that all properties and methods throw the ImmutableElementException in the immutable state.
In other words TDD is used here.
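For the curious, the core of such a leak detector can be sketched in a few lines of reflection (an illustration only, covering just property setters, not methods):
// Illustration only: verify that every public setter on a frozen element throws.
static void AssertNoLeakyProperties(Element frozen)
{
    frozen.MakeReadOnly();
    foreach (var prop in frozen.GetType().GetProperties())
    {
        if (!prop.CanWrite) continue;
        object dummy = prop.PropertyType.IsValueType
            ? Activator.CreateInstance(prop.PropertyType)
            : null;
        try
        {
            prop.SetValue(frozen, dummy, null);
            throw new Exception("Leaky property: " + prop.Name);
        }
        catch (TargetInvocationException ex)
        {
            if (!(ex.InnerException is ImmutableElementException))
                throw;
            // expected: the setter called FailIfImmutable()
        }
    }
}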
I have grown to like this pattern a lot and find great benefits in it. So what I would like to know is whether any of you are using similar patterns. If yes, do you know of any good resources that document them? I am essentially looking for potential improvements and for any standards that might already exist on this topic.
For info, the second approach is called "popsicle immutability".
Eric Lippert has a series of blog entries on immutability starting here. I'm still getting to grips with the CTP (C# 4.0), but it looks interesting what optional / named parameters (to the .ctor) might do here (when mapped to readonly fields)...
[update: I've blogged on this here]
For info, I probably wouldn't make those methods virtual - we probably don't want subclasses being able to make it non-freezable. If you want them to be able to add extra code, I'd suggest something like:
[public|protected] void Freeze()
{
if(!frozen)
{
frozen = true;
OnFrozen();
}
}
protected virtual void OnFrozen() {} // subclass can add code here.
Also - AOP (such as PostSharp) might be a viable option for adding all those ThrowIfFrozen() checks.
(apologies if I have changed terminology / method names - SO doesn't keep the original post visible when composing replies)
Another option would be to create some kind of Builder class.
For an example, in Java (and C# and many other languages) String is immutable. If you want to do multiple operations to create a String you use a StringBuilder. This is mutable, and then once you're done you have it return to you the final String object. From then on it's immutable.
You could do something similar for your other classes. You have your immutable Element, and then an ElementBuilder. All the builder would do is store the options you set, then when you finalize it it constructs and returns the immutable Element.
It's a little more code, but I think it's cleaner than having setters on a class that's supposed to be immutable.
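A minimal sketch of that idea, reusing the constructor-based SampleElement from the question (the builder names are made up):
public class SampleElementBuilder
{
    private Guid id;
    private string name;

    public SampleElementBuilder WithId(Guid value) { id = value; return this; }
    public SampleElementBuilder WithName(string value) { name = value; return this; }

    // the builder stays mutable; the built object is immutable from birth
    public SampleElement Build() { return new SampleElement(id, name); }
}

// usage:
SampleElement element = new SampleElementBuilder()
    .WithId(Guid.NewGuid())
    .WithName("Sample")
    .Build();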
After my initial discomfort about the fact that I had to create a new System.Drawing.Point on each modification, I wholly embraced the concept some years ago. In fact, I now create every field as readonly by default and only change it to be mutable if there's a compelling reason - which there is surprisingly rarely.
I don't care very much about cross-threading issues, though (I rarely use code where this is relevant). I just find it much, much better because of the semantic expressiveness. Immutability is the very epitome of an interface which is hard to use incorrectly.
You are still dealing with state, and thus can still be bitten if your objects are parallelized before being made immutable.
A more functional way might be to return a new instance of the object with each setter. Or create a mutable object and pass that in to the constructor.
The (relatively) new software design paradigm called Domain-Driven Design makes the distinction between entity objects and value objects.
Entity objects are defined as anything that has to map to a key-driven object in a persistent data store, like an employee, or a client, or an invoice, etc., where changing the properties of the object implies that you need to save the change to a data store somewhere, and where the existence of multiple instances of a class with the same "key" implies a need to synchronize them, or to coordinate their persistence to the data store so that one instance's changes do not overwrite the others'. Changing the properties of an entity object implies you are changing something about the object - not changing WHICH object you are referencing...
Value objects, on the other hand, are objects that can be considered immutable, whose utility is defined strictly by their property values, and for which multiple instances do not need to be coordinated in any way... like addresses, or telephone numbers, or the wheels on a car, or the letters in a document... these things are totally defined by their properties... an uppercase 'A' object in a text editor can be interchanged transparently with any other uppercase 'A' object throughout the document; you don't need a key to distinguish it from all the other 'A's. In this sense it is immutable, because if you change it to a 'B' (just like changing the phone number string in a phone number object), you are not changing the data associated with some mutable entity, you are switching from one value to another... just as when you change the value of a string...
Expanding on the point by @Cory Foy and @Charles Bretana that there is a difference between entities and values: whereas value objects should always be immutable, I really don't think that an object should be able to freeze itself, or allow itself to be frozen arbitrarily in the codebase. It has a really bad smell to it, and I worry that it could get hard to track down where exactly an object was frozen, and why it was frozen, given that between calls to an object it could change state from thawed to frozen.
That isn't to say that you won't sometimes want to give a (mutable) entity to something and ensure it isn't going to be changed.
So, instead of freezing the object itself, another possibility is to copy the semantics of ReadOnlyCollection<T>:
List<int> list = new List<int> { 1, 2, 3};
ReadOnlyCollection<int> readOnlyList = list.AsReadOnly();
Your object can act as mutable when it needs to be, and then be immutable when you desire it to be.
Note that ReadOnlyCollection<T> also implements ICollection<T>, which has an Add(T item) method in the interface. However, there is also bool IsReadOnly { get; } defined in the interface so that consumers can check before calling a method that will throw an exception.
The difference is that you can't just set IsReadOnly to false. A collection either is or isn't read only, and that never changes for the lifetime of the collection.
It would be nice at times to have the const-correctness that C++ gives you at compile time, but that starts to have its own set of problems and I'm glad C# doesn't go there.
ICloneable - I thought I'd just refer back to the following:
Do not implement ICloneable
Do not use ICloneable in public APIs
Brad Abrams - Design Guidelines, Managed code and the .NET Framework
System.String is a good example of an immutable class with setters and mutating methods, except that each mutating method returns a new instance.
This is an important problem, and I'd love to see more direct framework/language support to solve it. The solution you have requires a lot of boilerplate. It might be simple to automate some of the boilerplate by using code generation.
You'd generate a partial class that contains all the freezable properties. It would be fairly simple to make a reusable T4 template for this.
The template would take this for input:
namespace
class name
list of property name/type tuples
And would output a C# file, containing:
namespace declaration
partial class
each of the properties, with the corresponding types, a backing field, a getter, and a setter which invokes the FailIfFrozen method
AOP tags on freezable properties could also work, but it would require more dependencies, whereas T4 is built into newer versions of Visual Studio.
Another scenario which is very much like this is the INotifyPropertyChanged interface. Solutions for that problem are likely to be applicable to this problem.
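To make item 3 of the generated output concrete, a single freezable property might look like this (a sketch; FailIfFrozen corresponds to FailIfImmutable in the question's Element class):
public partial class SampleElement
{
    private string name; // generated backing field

    public string Name
    {
        get { return name; }
        set
        {
            FailIfFrozen(); // generated guard in every setter
            name = value;
        }
    }
}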
My problem with this pattern is that you're not imposing any compile-time constraints upon immutability. The coder is responsible for making sure an object is set to immutable before, for example, adding it to a cache or another non-thread-safe structure.
That's why I would extend this coding pattern with a compile-time constraint in the form of a generic class, like this:
public class Immutable<T> where T : IElement
{
private T value;
public Immutable(T mutable)
{
this.value = (T) mutable.Clone();
this.value.MakeReadOnly();
}
public T Value
{
get
{
return this.value;
}
}
public static implicit operator Immutable<T>(T mutable)
{
return new Immutable<T>(mutable);
}
public static implicit operator T(Immutable<T> immutable)
{
return immutable.value;
}
}
Here's a sample how you would use this:
// All elements of this list are guaranteed to be immutable
List<Immutable<SampleElement>> elements =
new List<Immutable<SampleElement>>();
for (int i = 1; i < 10; i++)
{
SampleElement newElement = new SampleElement();
newElement.Id = Guid.NewGuid();
newElement.Name = "Sample" + i.ToString();
// The compiler will automatically convert to Immutable<SampleElement> for you
// because of the implicit conversion operator
elements.Add(newElement);
}
foreach (SampleElement element in elements)
Console.Out.WriteLine(element.Name);
elements[3].Value.Id = Guid.NewGuid(); // This will throw an ImmutableElementException
Just a tip to simplify the element properties: Use automatic properties with private set and avoid explicitly declaring the data field. e.g.
public class SampleElement {
public SampleElement(Guid id, string name) {
Id = id;
Name = name;
}
public Guid Id {
get; private set;
}
public string Name {
get; private set;
}
}
Here is a new video on Channel 9 where Anders Hejlsberg, from 36:30 in the interview, starts talking about immutability in C#. He gives a very good use case for popsicle immutability and explains how this is something you are currently required to implement yourself. It was music to my ears hearing him say it is worth thinking about better support for creating immutable object graphs in future versions of C#.
Expert to Expert: Anders Hejlsberg - The Future of C#
Two other options for your particular problem that haven't been discussed:
Build your own deserializer, one that can call a private property setter. While the up-front effort of building the deserializer will be much greater, it makes things cleaner. The compiler will keep you from even attempting to call the setters, and the code in your classes will be easier to read.
Put a constructor in each class that takes an XElement (or some other flavor of XML object model) and populates itself from it. Obviously as the number of classes increases, this quickly becomes less desirable as a solution.
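For the second option, the constructor could be as small as this (a sketch using the SampleElement fields from the question; System.Xml.Linq assumed, and the element names assumed to match the serialized output):
public SampleElement(XElement element)
{
    // populate the private fields directly from the saved XML
    id = new Guid((string)element.Element("Id"));
    name = (string)element.Element("Name");
}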
How about having an abstract class ThingBase, with subclasses MutableThing and ImmutableThing? ThingBase would contain all the data in a protected structure, providing public read-only properties for the fields and protected read-only property for its structure. It would also provide an overridable AsImmutable method which would return an ImmutableThing.
MutableThing would shadow the properties with read/write properties, and provide both a default constructor and a constructor that accepts a ThingBase.
ImmutableThing would be a sealed class that overrides AsImmutable to simply return itself. It would also provide a constructor that accepts a ThingBase.
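A rough sketch of that shape (all names assumed; the protected copy constructor stands in for the "constructor that accepts a ThingBase"):
public abstract class ThingBase
{
    protected struct ThingData
    {
        public Guid Id;
        public string Name;
    }

    private ThingData data;

    protected ThingData Data { get { return data; } }          // protected read-only view of the structure
    protected void SetData(ThingData value) { data = value; }  // write access for mutable subclasses

    protected ThingBase() { }
    protected ThingBase(ThingBase other) { data = other.data; } // copies the value-type data

    public Guid Id { get { return data.Id; } }
    public string Name { get { return data.Name; } }

    public virtual ImmutableThing AsImmutable() { return new ImmutableThing(this); }
}

public class MutableThing : ThingBase
{
    public MutableThing() { }
    public MutableThing(ThingBase other) : base(other) { }

    public new Guid Id
    {
        get { return Data.Id; }
        set { var d = Data; d.Id = value; SetData(d); }
    }

    public new string Name
    {
        get { return Data.Name; }
        set { var d = Data; d.Name = value; SetData(d); }
    }
}

public sealed class ImmutableThing : ThingBase
{
    public ImmutableThing(ThingBase other) : base(other) { }
    public override ImmutableThing AsImmutable() { return this; } // already immutable
}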
I don't like the idea of being able to change an object from a mutable to an immutable state; that kind of seems to defeat the point of the design to me. When do you need to do that? Only objects which represent VALUES should be immutable.
You can use optional named arguments together with nullables to make an immutable setter with very little boilerplate. If you really do want to set a property to null, then you may have a bit more trouble.
class Foo{
...
public Foo
Set
( double? majorBar=null
, double? minorBar=null
, int? cats=null
, double? dogs=null)
{
return new Foo
( majorBar ?? MajorBar
, minorBar ?? MinorBar
, cats ?? Cats
, dogs ?? Dogs);
}
public Foo
( double R
, double r
, int l
, double e
)
{
....
}
}
You would use it like so:
var f = new Foo(10,20,30,40);
var g = f.Set(cats: 99);