I'm using Automapper to clone an object. I have a class which contains collections that I want handled in a non-standard way, which I'll explain below (I've stripped out a bunch of code to highlight the specific issue):
public class CommunityModel
{
    private readonly IUIManager _uiMgr;
    private readonly IMapper _mapper;
    private ValidatedCollection<CommunityUserModel, string> _users;
    private int _communityIndex = -1;

    public CommunityModel( IUIManager uiMgr, IMapper mapper, IElementManager<CommunityUserModel, string> userMgr )
    {
        _uiMgr = uiMgr ?? throw new NullReferenceException( nameof(uiMgr) );
        _mapper = mapper ?? throw new NullReferenceException( nameof(mapper) );

        Users = new ValidatedCollection<CommunityUserModel, string>( userMgr );
    }

    public int CommunityIndex
    {
        get => _communityIndex;
        set
        {
            if (value < -1) value = -1;

            Set(ref _communityIndex, value);
            IsNew = value < 0;
        }
    }

    public ValidatedCollection<CommunityModel, string> Collection { get; set; }

    public ValidatedCollection<CommunityUserModel, string> Users
    {
        get => _users;
        set
        {
            ChangeTracker.RegisterCollection(value);
            SetAndValidate( ref _users, value );
        }
    }
}
ValidatedCollection<> is an extension of WPF's ObservableCollection. The CommunityIndex property uniquely identifies a CommunityModel instance. This allows me to use the Automapper.Collection extensions via the EqualityComparison() extension method.
I don't want Automapper to initialize the Users collection, because it gets initialized in the class constructor. But I do want the elements of the Users collection to be cloned from the source Users collection to the destination Users collection.
The Collection collection contains a list of CommunityModel objects, including the instance which has the Collection property (i.e., Collection is a set of sibling CommunityModel instances). I'd like Automapper to initialize the collection, and, ultimately, copy all of the CommunityModel siblings other than the source object to that collection, and then add the destination object to the collection (i.e., in the end, Collection will be a set of sibling CommunityModel objects, with the source CommunityModel replaced by the destination CommunityModel).
My map is currently defined as follows:
CreateMap<CommunityModel, CommunityModel>()
    .ForMember(dest => dest.Users, opt => opt.Ignore())
    .ForMember(dest => dest.Collection, opt => opt.Ignore())
    .EqualityComparison((x, y) => x.CommunityIndex == y.CommunityIndex);

CreateMap<CommunityUserModel, CommunityUserModel>()
    .EqualityComparison((x, y) => x.UserIndex == y.UserIndex);
If I don't Ignore() the Users collection, Automapper will initialize the collection, which overrides the required configuration of Users set in the constructor and causes problems elsewhere in my app. But if I Ignore() the Users collection, its elements are never cloned from source to destination. What I want is for Automapper not to initialize Users, but still clone its contents.
If I don't ignore the Collection collection, I get an infinite loop and a stack overflow. I believe this is because cloning Collection's elements involves creating an instance of the CommunityModel which owns the Collection property; that instance should simply be a reference to the destination object being created, but instead another identical destination object gets created, and so on. What I'd like is for Automapper to initialize the Collection collection but >>not<< clone the source elements, which I guess I'd have to do later in an AfterMap() call.
I realize this is somewhat arcane, but the overall design of my project results in these requirements.
What is the best way of doing this within Automapper? Should I look into creating a custom value resolver, even though I'm cloning an object, so the property names are identical between source and destination?
I'm going to describe how I resolved my issue, but I'm not going to mark it as an answer, because I'm not familiar enough with Automapper to know if what I think it's doing is what it's actually doing.
What appears to be happening when Automapper maps collections -- even with the Automapper.Collections library installed and activated -- is that if the source and destination collection types differ, the collections are deemed to be "different", even when the element types of the two collections can be automatically mapped.
For example, if the source Community object has a List<> of CommunityUsers:
public class Community
{
    public string Name { get; set; }
    public string SiteUrl { get; set; }
    public string LoginUrl { get; set; }
    public List<CommunityUser> Users { get; set; }
}

public class CommunityUser
{
    public string UserID { get; set; }
    public string VaultKeyName { get; set; }
}
and you want to map it to destination objects like this:
public class CommunityModel
{
    // omitting constructor, which contains logic to
    // initialize the Users property based on constructor arguments

    public string Name { get; set; }
    public string LoginUrl { get; set; }
    public string SiteUrl { get; set; }
    public ValidatedCollection<CommunityUserModel, string> Users { get; set; }
}

public class CommunityUserModel
{
    public string UserID { get; set; }
    public string VaultKeyName { get; set; }
}
even though all the property names are "recognizable" by Automapper, and the two Users collections are both IEnumerable, the fact that the two collections are of different types apparently causes Automapper to treat the collections as "different".
And that apparently means the add/delete/copy logic of Automapper.Collections doesn't get used, even if present. Instead, Automapper tries to create an instance of (in this case) ValidatedCollection, populate it from the source object collection, and then assign it to destination object collection.
That's fine if ValidatedCollection<> doesn't have required constructor arguments. But it'll fail if it does. Which is what happened in my case.
My workaround was to do this in the Mapper definition:
CreateMap<Community, CommunityModel>()
    .ForMember(dest => dest.Users, opt => opt.Ignore())
    .AfterMap((src, dest, rc) =>
    {
        foreach (var srcUser in src.Users)
        {
            dest.Users.Add(rc.Mapper.Map<CommunityUserModel>(srcUser));
        }
    });
This keeps Automapper from doing anything with the destination Users property (which is initialized in the CommunityModel constructor), and maps over the source User objects after the "automatic" mapping is done.
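An alternative worth investigating, which I have not verified: AutoMapper also has a UseDestinationValue() member option, which tells it to map into the member instance the destination's constructor created instead of constructing a new one. A sketch, assuming it cooperates with Automapper.Collection's equality matching:
CreateMap<Community, CommunityModel>()
    // UseDestinationValue(): keep the ValidatedCollection built by the
    // CommunityModel constructor and map the source elements into it,
    // instead of newing up a collection and overwriting the property.
    .ForMember(dest => dest.Users, opt => opt.UseDestinationValue());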
I have a parent class and two child classes like these:
public class Parent
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class FirstChild
{
    public string IdentityCode { get; set; }
}

public class OtherChild
{
    public string RegistrationCode { get; set; }
}
There is a question: is it a good approach to store these two inherited classes in the same index inside ElasticSearch?
I see there is a _type property added to my docs after they are stored in the DB, but it always has the value "doc".
I tested this code to set it, but it doesn't seem to work this way:
await ElasticClient.IndexAsync<FirstChild>(child, m => m.Index(IndexName));
Also, I found this question on SO for retrieving my entries from the DB, but it is outdated; the API has changed and is no longer accessible.
I want to know whether it is a good approach to store sibling data in the same index, and if so, how to do it properly.
As of ES 6.0, it is no longer possible to store multiple types inside the same index, i.e. the _type field you're referring to will always be either doc or _doc. In ES 8.0, the _type field will be removed altogether.
However, if it makes sense for your use case, you can still decide to store several types inside a single index using a custom type field that is present in your document.
You should strive to only store in the same index data that shares the same (or a very similar) mapping, which doesn't seem to be the case for Parent, FirstChild and OtherChild. But if you add a public string type property to your classes, you can still do it.
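By way of illustration, a minimal sketch of the custom type field approach; the Type property name and its values are arbitrary choices for this example, not anything Elasticsearch mandates:
public class FirstChild
{
    public string IdentityCode { get; set; }
    public string Type { get; set; } = "first_child"; // custom discriminator
}

public class OtherChild
{
    public string RegistrationCode { get; set; }
    public string Type { get; set; } = "other_child"; // custom discriminator
}

// Both documents go into the same index; filter on the type field when querying.
await ElasticClient.IndexAsync(firstChild, m => m.Index(IndexName));
await ElasticClient.IndexAsync(otherChild, m => m.Index(IndexName));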
I have 2 models, one of which has a child collection of the other:
[Table("ParentTable")]
public class Parent
{
[Key, Column("Parent")]
public string Id { get; set; }
[Column("ParentName")]
public string Name { get; set; }
public virtual ICollection<Widget> Widgets { get; set; }
}
[Table("WidgetTable")]
public class Widget
{
public string Year { get; set; }
[Column("Parent")]
public string ParentId { get; set; }
public string Comments { get; set; }
[Key, Column("ID_Widget")]
public int Id { get; set; }
[ForeignKey("ParentId"), JsonIgnore]
public virtual Parent Parent { get; set; }
}
This code works for > 99% of widgets:
var parent = _dbContext.Parents.FirstOrDefault(p => p.Id == parentId);
Usually, parent.Widgets is a collection with more than one item. In a couple of instances, however, parent.Widgets is null (not a collection with no items).
I have used Query Analyzer to trace both the query for the parent and the query for widgets belonging to that parent. Both return exactly the rows I expect; however, the model for one or two parent IDs results in a null value for the Widgets collection. What could cause a lazy-loaded collection to be null in some instances but not others?
This situation commonly comes up when a dbContext's lifetime is left open across an Add, a SaveChanges, and then a retrieval.
For example:
var context = new MyDbContext(); // holding Parents.
var testParent = new Parent { Id = "Parent1", Name = "Parent 1" };
context.Parents.Add(testParent);
At this point if you were to do:
var result = context.Parents.FirstOrDefault(x => x.Id == "Parent1");
you wouldn't get a parent back. Selection comes from committed state. So...
context.SaveChanges();
var result = context.Parents.FirstOrDefault(x => x.Id == "Parent1");
This will return a reference to the parent you had inserted, since the context knows about this entity and has a reference to the object you created. It doesn't go to data state. Since Widgets was defined as just a get/set auto-property, the Widgets collection in this case will be null.
if you do this:
context.Dispose();
context = new MyDbContext();
var result = context.Parents.FirstOrDefault(x => x.Id == "Parent1");
In this case the parent is not known by the new context, so it goes to data state. EF will return a proxy list for lazy-loading the Widgets; since there are none, you get back an empty list, not null.
When dealing with collection classes in EF it's best to avoid auto-properties, or to initialize them in your constructor, to avoid this behaviour; you'll rarely, if ever, want to assign Widgets after creating a Parent. Initializing a default member is better because you don't want to encourage ever using a setter on the collection property.
For example:
private readonly List<Widget> _widgets = new List<Widget>();

public virtual ICollection<Widget> Widgets
{
    get { return _widgets; }
    protected set { throw new InvalidOperationException("Do not set the Widget collection. Use Clear() and Add()"); }
}
Avoid performing a Set operation on a collection property as this will screw up in entity reference scenarios. For instance, if you wanted to sort your Widget collection by year and did something like:
parent.Widgets = parent.Widgets.OrderBy(x => x.Year).ToList();
Seems innocent enough, but when the Widgets reference was an EF proxy, you've just blown it away. EF now cannot perform change tracking on the collection.
Initialize your collection and you should avoid surprises with null collection references. Also, I would look at the lifetime of your dbContext. It's good to keep one initialized over the lifetime of a request or a particular operation, but avoid keeping them alive longer than necessary. Change tracking and the like consume resources, and you can see seemingly intermittent odd behaviour like this when contexts cross operations.
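As a minimal sketch of scoping a context to a single operation (MyDbContext and parentId are the illustrative names from above):
// A fresh, short-lived context has no stale tracked entities that can
// shadow what is actually committed in the database.
using (var context = new MyDbContext())
{
    var parent = context.Parents.FirstOrDefault(p => p.Id == parentId);
    // ... work with parent, call SaveChanges() if anything changed ...
}   // disposed here; the next operation constructs its own context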
I have a source object which contains 2 references to the same collection. If I map the source type to a structurally-equivalent target type, AutoMapper will create two instances of the collection in the target instance.
class SourceThing
{
    public string Name { get; set; }
    public List<int> Numbers { get; set; }
    public List<int> MoreNumbers { get; set; }
}

class TargetThing
{
    public string Name { get; set; }
    public List<int> Numbers { get; set; }
    public List<int> MoreNumbers { get; set; }
}
If I create a SourceThing with two references to the same List and map it to a TargetThing, the result is a TargetThing with two separate instances of the collection.
public void MapObjectWithTwoReferencesToSameList()
{
    Mapper.CreateMap<SourceThing, TargetThing>();
    //Mapper.CreateMap<List<int>, List<int>>(); // passes when mapping here

    var source = new SourceThing() { Name = "source" };
    source.Numbers = new List<int>() { 1, 2, 3 };
    source.MoreNumbers = source.Numbers;
    Assert.AreSame(source.Numbers, source.MoreNumbers);

    var target = Mapper.Map<TargetThing>(source);
    Assert.IsNotNull(target.Numbers);
    Assert.AreSame(target.Numbers, target.MoreNumbers); // fails
}
Is this meant to be the default mapping behavior for concrete collections in AutoMapper? Through testing, I realized that if I mapped List<int> to List<int>, I achieve the behavior I want, but I don't understand why. If AutoMapper tracks references and doesn't re-map a mapped object, wouldn't it see that the source.MoreNumbers points to the same list as source.Numbers, and set the target accordingly?
I did some more research and tinkering. Internally, as the mapping engine walks the object graph, it chooses the best mapper for each source type/destination type. Unless there is a non-standard mapping (oversimplified), the engine will next look for a registered mapper for source and destination type. If it finds one, it creates the destination object, then traverses and maps all of the properties. It also places that destination object into the ResolutionContext.InstanceCache, which is a Dictionary<ResolutionContext, object>. If the same source object is encountered again in the same root mapping call, it'll pull the object from the cache, instead of wasting time to re-map.
However, if there is no registered mapper, the engine chooses the next applicable mapper, which in this case is the AutoMapper.Mappers.CollectionMapper. The collection mapper creates a destination collection, enumerates the source collection and maps each element. It does not add the destination object into the cache. This is clearly the design.
Resolution Context
What I find really interesting is how objects are cached in the InstanceCache. The key is the current ResolutionContext, which contains the source and destination type and the source value. ResolutionContext overrides GetHashCode() and Equals(), which use the underlying source value's same methods. I can define equality on a custom class such that a source collection with multiple equal but distinct instances of that class maps to a collection with multiple references to the same instance.
This class:
class EquatableThing
{
    public string Name { get; set; }

    public override bool Equals(object other)
    {
        if (ReferenceEquals(this, other)) return true;
        if (ReferenceEquals(null, other)) return false;
        return this.Name == ((EquatableThing)other).Name;
    }

    public override int GetHashCode()
    {
        return Name.GetHashCode();
    }
}
Map a collection with 2 equal (but separate) things and the result is a collection with 2 pointers to the same thing!
public void MapCollectionWithTwoEqualItems()
{
    Mapper.CreateMap<EquatableThing, EquatableThing>();

    var thing1 = new EquatableThing() { Name = "foo" };
    var thing2 = new EquatableThing() { Name = "foo" };
    Assert.AreEqual(thing1, thing2);
    Assert.AreEqual(thing1.GetHashCode(), thing2.GetHashCode());
    Assert.AreNotSame(thing1, thing2);

    // create list and map this thing across
    var list = new List<EquatableThing>() { thing1, thing2 };
    var result = Mapper.Map<List<EquatableThing>, List<EquatableThing>>(list);
    Assert.AreSame(result[0], result[1]);
}
Preserve References
I, for one, wonder why the default behavior of AutoMapper wouldn't be to map an object graph as closely as possible to the destination structure. N source objects results in N destination objects. But since it doesn't, I'd love to see an option on the Map method to PreserveReferences like a serializer would. If that option was picked, then every reference that is mapped is placed in a Dictionary using a reference equality comparer and source object for the key and the destination as the value. Essentially, if something is already mapped, the result object of that map is used.
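For what it's worth, later AutoMapper releases did add an option along these lines; availability and exact semantics depend on your version, so treat this as a sketch to verify rather than a guarantee:
// PreserveReferences() keeps a source-to-destination identity map for the
// duration of a single Map() call, so a source object reached twice maps
// to a single destination instance.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<SourceThing, TargetThing>()
       .PreserveReferences();
});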
There is nothing wrong with the behavior; it is just how automapper maps.
In the top section, you create a list of numbers and then assign it to a second property. You can then compare them and they are the same, because the object has two pointers to the same list. It did not copy the numbers; it simply made a new reference, just like you asked.
Now move to the automapper. It runs through and maps from one object to an equivalent object. It maps each of the properties separately, copying the information. So even though the source's MoreNumbers is a pointer to the same list as Numbers, automapper maps each property individually. Why? It is mapping properties, not examining the property pointers. And in most instances you would not want it to.
Does this make sense?
If the ultimate goal is to get a test passing, the question is not "do Numbers and MoreNumbers point at the same object", but rather "do Numbers and MoreNumbers contain the exact same list of values". In the first instance, the answer to both is yes, as there is a single object (list) behind both properties. In the second, the first answer is no and the second is yes: the lists are equivalent, but they do not point at the same object.
If you truly want it to be the same object, you will have to play the game a bit differently. If you simply want to know if the list has the same elements, then change the assertion.
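For example, with an element-wise assertion instead of a reference one (a sketch using MSTest helpers):
// Passes when both lists hold equal elements in the same order, regardless
// of whether they are the same object.
CollectionAssert.AreEqual(target.Numbers, target.MoreNumbers);

// LINQ alternative (requires System.Linq):
Assert.IsTrue(target.Numbers.SequenceEqual(target.MoreNumbers));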
I'm only using Code Analysis for cleaning, organizing and ensuring these changes are globally performed for all instances of a particular warning. I'm down to the final warning, and it's CA2227.
CA2227 Collection properties should be read only. Change '' to be read-only by removing the property setter.
Note this is for mapping of EDI documents. These classes are to represent a whole or part of an EDI document.
public class PO1Loop
{
    public SegmentTypes.PO1LoopSegmentTypes.PO1 PO1 { get; set; }
    public Collection<SegmentTypes.PO1LoopSegmentTypes.PID1> PIDRepeat1 { get; set; }
    public Collection<SegmentTypes.PO1LoopSegmentTypes.PID2> PIDRepeat2 { get; set; }
    public SegmentTypes.PO1LoopSegmentTypes.PO4 PO4 { get; set; }

    /* Max Use: 8 */
    public Collection<SegmentTypes.PO1LoopSegmentTypes.ACK> ACKRepeat { get; set; }
}
You can see all of the Collection properties will give me this warning, and there are hundreds of them. When using the above class I instantiate it without any data. Then externally I add the data and set each individual variable through its public accessor. I do not instantiate this class with all the data prepared and passed using a constructor method (IMO for the size these can reach it can easily wreak havoc on the eyes). When complete and all properties are assigned the class as a whole is then used to generate that part of a document it represents.
My question is, for the usage described above, what would be a better approach for setting this up correctly? Do I keep the public accessors and suppress this warning entirely, or is there an entirely different solution that would work?
Here's what MSDN says about the error, and also how you can avoid it.
Here's my take on the issue.
Consider, the following class:
class BigDataClass
{
    public List<string> Data { get; set; }
}
This class will throw that exact same issue. Why? Because Collections do not need a setter. Now, we can do anything with that object: assign Data to an arbitrary List<string>, add elements to Data, remove elements from Data, etc. If we remove the setter, we only lose the ability to directly assign to that property.
Consider the following code:
class BigDataClass
{
    private List<string> data = new List<string>();
    public List<string> Data { get { return data; } } // note, we removed the setter
}

var bigData = new BigDataClass();
bigData.Data.Add("Some String");
This code is perfectly valid, and in fact the recommended way to do things. Why? Because the List<string> is a reference to a memory location that contains the remainder of the data.
Now, the only thing you cannot do with this is directly set the Data property. I.e. the following is invalid:
var bigData = new BigDataClass();
bigData.Data = new List<string>();
This is not necessarily a bad thing. You'll notice that this model is used on many .NET types. It's the basics of immutability: you usually do not want direct access to the mutability of Collections, as this can cause accidental behavior with strange issues. This is why Microsoft recommends you omit setters.
Example:
var bigData = new BigDataClass();
bigData.Data.Add("Some String");
var l2 = new List<string>();
l2.Add("String 1");
l2.Add("String 2");
bigData.Data = l2;
Console.WriteLine(bigData.Data[0]);
We might be expecting Some String, but we'll get String 1. This also means that you cannot reliably attach events to the Collection in question, so you cannot reliably determine if new values are added or values are removed.
A writable collection property allows a user to replace the collection with a completely different collection.
Essentially, if you only ever need to run the constructor, or assignment, once, then omit the set modifier. You won't need it; direct assignment of collections is against best practices.
Now, I'm not saying never use a setter on a Collection, sometimes you may need one, but in general you should not use them.
You can always use .AddRange, .Clone, etc. on the Collections, you only lose the ability of direct assignment.
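For instance, to replace the contents of BigDataClass.Data without ever reassigning the property:
var bigData = new BigDataClass();
bigData.Data.Add("Some String");

// Clear and refill in place: the Data reference never changes, so anything
// observing that list keeps a valid reference.
bigData.Data.Clear();
bigData.Data.AddRange(new[] { "String 1", "String 2" });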
Serialization
Lastly, what do we do if we wish to serialize or deserialize a class that contains our Collection without a set? Well, there is always more than one way to do it; the simplest (in my opinion) is to create a property that represents the serialized collection.
Take our BigDataClass for example. If we wished to Serialize, and then Deserialize this class with the following code, the Data property would have no elements.
JavaScriptSerializer jss = new JavaScriptSerializer();
BigDataClass bdc = new BigDataClass();
bdc.Data.Add("Test String");
string serd = jss.Serialize(bdc);
Console.WriteLine(serd);
BigDataClass bdc2 = jss.Deserialize<BigDataClass>(serd);
So, to fix this, we can simply modify our BigDataClass a bit to make it use a new string property for Serialization purposes.
public class BigDataClass
{
    private List<string> data = new List<string>();

    [ScriptIgnore]
    public List<string> Data { get { return data; } } // note, we removed the setter

    public string SerializedData
    {
        get
        {
            JavaScriptSerializer jss = new JavaScriptSerializer();
            return jss.Serialize(data);
        }
        set
        {
            JavaScriptSerializer jss = new JavaScriptSerializer();
            data = jss.Deserialize<List<string>>(value);
        }
    }
}
Another option is always the DataContractSerializer (which is really a better option, in general.) You can find information about it on this StackOverflow question.
With current VS2019 we can simply do this:
public List<string> Data { get; } = new List<string>();
This satisfies CA2227 and can be serialized/deserialized.
The deserialization works because List<> has an "Add" method, and the serializer knows how to handle a read-only collection property with an Add method (the property is read-only but not the elements) (I use Json.Net, other serializers may behave differently).
Edit:
As pointed out, it should be "=" and not "=>". If we used public List<string> Data => new List<string>(); then it would create a new list every time the property was accessed, which is not what we want either.
Edit:
Note that this will NOT work if the type of the property is an interface, such as IList<string>.
Edit:
I think the handling of interfaces is determined by the serializer used. The following works perfectly. I'm sure all common serializers know how to handle ICollection. And if you have some custom interface that does not implement ICollection then you should be able to configure the serializer to handle it, but in that case CA2227 probably won't be triggered making it irrelevant here. (As it is a read-only property you have to assign a concrete value within the class so it should always be serializing and de-serializing a non-null value)
public class CA2227TestClass
{
    public IList<string> Data { get; } = new List<string>();
}

[TestMethod]
public void CA2227_Serialization()
{
    var test = new CA2227TestClass()
    {
        Data = { "One", "Two", "Three" }
    };

    var json = JsonConvert.SerializeObject(test);
    Assert.AreEqual("{\"Data\":[\"One\",\"Two\",\"Three\"]}", json);

    var jsonObject = JsonConvert.DeserializeObject(json, typeof(CA2227TestClass)) as CA2227TestClass;
    Assert.IsNotNull(jsonObject);
    Assert.AreEqual(3, jsonObject.Data.Count);
    Assert.AreEqual("One", jsonObject.Data[0]);
    Assert.AreEqual("Two", jsonObject.Data[1]);
    Assert.AreEqual("Three", jsonObject.Data[2]);
    Assert.AreEqual(typeof(List<string>), jsonObject.Data.GetType());
}
💡 Alternative Solution 💡
In my situation, making the property read-only was not viable because the whole list (as a reference) could change to a new list.
I was able to resolve this warning by changing the properties' setter scope to be internal.
public List<Batch> Batches
{
    get { return _Batches; }
    internal set { _Batches = value; OnPropertyChanged(nameof(Batches)); }
}
Note one could also use private set...
The hint of this warning (its Achilles' heel) seems really pointed at libraries, for the documentation says (bolding mine):
An externally visible writable property is a type that implements System.Collections.ICollection.
For me it was, "Ok, I won't make it viewable externally...." and internal was fine for the app.
Thanks to @Matthew, @CraigW and @EBrown for helping me understand the solution for this warning.
public class PO1Loop
{
    public SegmentTypes.PO1LoopSegmentTypes.PO1 PO1 { get; set; }
    public Collection<SegmentTypes.PO1LoopSegmentTypes.PID1> PIDRepeat1 { get; private set; }
    public Collection<SegmentTypes.PO1LoopSegmentTypes.PID2> PIDRepeat2 { get; private set; }
    public SegmentTypes.PO1LoopSegmentTypes.PO4 PO4 { get; set; }

    /* Max Use: 8 */
    public Collection<SegmentTypes.PO1LoopSegmentTypes.ACK> ACKRepeat { get; private set; }

    public PO1Loop()
    {
        PIDRepeat1 = new Collection<SegmentTypes.PO1LoopSegmentTypes.PID1>();
        PIDRepeat2 = new Collection<SegmentTypes.PO1LoopSegmentTypes.PID2>();
        ACKRepeat = new Collection<SegmentTypes.PO1LoopSegmentTypes.ACK>();
    }
}
When you want to assign data to the collection properties, use Clear, Add, AddRange (where available), or any other method for modifying a collection, as in the sketch below.
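Note that Collection<T> itself has no AddRange, so filling one of these properties looks something like the following sketch, where incomingAcks is a hypothetical source sequence:
var loop = new PO1Loop();

// Collection<T> exposes Add/Clear but not AddRange, so copy items in a
// loop (or a small extension method) rather than assigning a new collection.
loop.ACKRepeat.Clear();
foreach (var ack in incomingAcks)
    loop.ACKRepeat.Add(ack);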
You only need to suppress the warning while binding a DTO; otherwise, a custom ModelBinder is required to bind collections.
Quoting the rule documentation:
When to suppress warnings
You can suppress the warning if the property is part of a Data Transfer Object (DTO) class.
Otherwise, do not suppress warnings from this rule.
https://learn.microsoft.com/pt-br/visualstudio/code-quality/ca2227?view=vs-2019
DTOs often require serialization and deserialization. Thus, they are required to be mutable.
Having to create an alternate backing property is a pain.
Simply change the property type from List<string> to IReadOnlyList<string>, and this works as expected without CA2227. The collection is set via the property, but you can also cast to List<string> if you wish to append or delete items.
class Holder
{
    public IReadOnlyList<string> Col { get; set; } = new List<string>();
}
var list = new List<string> { "One", "Two" };
var holder = new Holder() { Col = list };
var json = JsonConvert.SerializeObject(holder);
// output json {"Col":["One","Two"]}

var deserializedHolder = JsonConvert.DeserializeObject<Holder>(json);
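And the cast mentioned above, as a usage sketch; it works only because the backing instance really is a List<string>:
// The property is typed IReadOnlyList<string>, but the object behind it is
// a List<string>, so a cast restores mutation. Use with care.
if (holder.Col is List<string> mutable)
{
    mutable.Add("Three");
}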
I had to fix some of the CA2227 violations, so I added the readonly keyword to the collection field and then, of course, had to remove the setter property. Some code that had used the setter just created a new, initially empty collection object; that code no longer compiled, so I had to add a SetXxx() method to provide the missing setter's functionality. I did it like this:
public void SetXxx(List<string> list)
{
    this.theList.Clear();
    this.theList.AddRange(list);
}
The callers that used the setter were changed to call SetXxx() instead.
Instead of creating a complete new list, the existing list is now cleared and filled with the items of another list, passed in as a parameter. The original list, because it is readonly and created only once, always remains.
I believe this is also a good way to avoid making the garbage collector clean up old list objects that went out of scope, and to avoid creating new collection objects when one already exists.
As an addition to Der Kommissar's excellent answer.
Starting with .NET 5 (C# 9.0) there are init-only properties. These properties are settable only under specific circumstances; see here for reference.
The following example should not raise a warning CA2227, yet still allow for the collection being set during object initialization.
using System.Collections.Generic;

namespace BookStore
{
    public class BookModel
    {
        public ICollection<string> Chapters { get; init; }
    }
}
Note that the current version of the .NET SDK still raises a warning when using the built-in analyzer (not the NuGet package). This is a known bug and should be fixed in the future.
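Usage then looks like the following; assignment is legal inside an object initializer and a compile error afterwards:
var book = new BookModel
{
    Chapters = new List<string> { "Intro", "Body" } // allowed during initialization
};

// book.Chapters = new List<string>(); // compile error: init-only property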
To cover all the possible scenarios for resolving the CA2227 error, including the entity relationship mapping used with Entity Framework:
class Program
{
    static void Main(string[] args)
    {
        ParentClass obj = new ParentClass();

        // Mutate the read-only collection properties in place.
        obj.ChildDetails.Clear();
        obj.ChildDetails.Add(new ChildClass());

        obj.LstNames.Clear();
        obj.LstNames.Add("example");
    }
}

public class ChildClass
{
}

public class ParentClass
{
    private readonly ICollection<ChildClass> _ChildClass;
    private readonly IList<string> _LstNames = new List<string>();

    public ParentClass()
    {
        _ChildClass = new HashSet<ChildClass>();
    }

    public virtual ICollection<ChildClass> ChildDetails => _ChildClass;

    // Backed by one list; an expression-bodied "=> new List<string>()" would
    // hand back a fresh, empty list on every access.
    public IList<string> LstNames => _LstNames;
}
Suppose you have a class Person:
public class Person
{
    public string Name { get; set; }
    public IEnumerable<Role> Roles { get; set; }
}
I should obviously instantiate the Roles in the constructor.
Now, I used to do it with a List like this:
public Person()
{
Roles = new List<Role>();
}
But I discovered this static method in the System.Linq namespace
IEnumerable<T> Enumerable.Empty<T>();
From MSDN:
The Empty<TResult>() method caches an empty sequence of type TResult. When the object it returns is enumerated, it yields no elements.
In some cases, this method is useful for passing an empty sequence to a user-defined method that takes an IEnumerable<T>. It can also be used to generate a neutral element for methods such as Union. See the Example section for an example of this use of Empty<TResult>().
So is it better to write the constructor like this? Do you use it? Why, or why not?
public Person()
{
    Roles = Enumerable.Empty<Role>();
}
I think most postings missed the main point. Even if you use an empty array or an empty list, those are objects, and they are stored in memory. The garbage collector has to take care of them. If you are dealing with a high-throughput application, the impact could be noticeable.
Enumerable.Empty does not create an object per call, thus putting less load on the GC.
If the code is in a low-throughput location, though, it boils down to aesthetic considerations.
I think Enumerable.Empty<T> is better because it is more explicit: your code clearly indicates your intentions. It might also be a bit more efficient, but that's only a secondary advantage.
On the performance front, let's see how Enumerable.Empty<T> is implemented.
It returns EmptyEnumerable<T>.Instance, which is defined as:
internal class EmptyEnumerable<T>
{
    public static readonly T[] Instance = new T[0];
}
Static fields on generic types are allocated per generic type parameter. This means that the runtime can lazily create these empty arrays only for the types user code needs, and reuse the instances as many times as needed without adding any pressure on the garbage collector.
To wit:
Debug.Assert(ReferenceEquals(Enumerable.Empty<int>(), Enumerable.Empty<int>()));
Assuming you actually want to populate the Roles property somehow, then encapsulate it by making its setter private and initialising it to a new list in the constructor:
public class Person
{
    public string Name { get; set; }
    public IList<Role> Roles { get; private set; }

    public Person()
    {
        Roles = new List<Role>();
    }
}
If you really really want to have the public setter, leave Roles with a value of null and avoid the object allocation.
The problem with your approach is that you can't add any items to the collection. I would keep a private backing structure, such as a List, and then expose the items as an IEnumerable:
public class Person
{
    private IList<Role> _roles;

    public Person()
    {
        this._roles = new List<Role>();
    }

    public string Name { get; set; }

    public void AddRole(Role role)
    {
        //implementation
    }

    public IEnumerable<Role> Roles
    {
        get { return this._roles.AsEnumerable(); }
    }
}
If you intend some other class to create the list of roles (which I wouldn't recommend) then I wouldn't initialise the enumerable at all in Person.
The typical problem with exposing the private List as an IEnumerable is that the client of your class can mess with it by casting. This code would work:
var p = new Person();
List<Role> roles = p.Roles as List<Role>;
roles.Add(Role.Admin);
You can avoid this by implementing an iterator:
public IEnumerable<Role> Roles
{
    get
    {
        foreach (var role in _roles)
            yield return role;
    }
}
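Another way to close that hole is to hand out a read-only wrapper; ReadOnlyCollection<T> (in System.Collections.ObjectModel) throws on mutation, so the cast trick above gains nothing. A sketch:
public IEnumerable<Role> Roles
{
    // The wrapper rejects Add/Remove, and "Roles as List<Role>" yields null,
    // so callers cannot tamper with the underlying list.
    get { return new ReadOnlyCollection<Role>(_roles); }
}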