I'm trying to have a simple one-to-many relationship/hierarchy using NHibernate. I would like orphans to be deleted automatically, but my current attempts all result in an ObjectDeletedException. Can someone tell me what I'm doing incorrectly?
EDIT:
I should have specified that I'm loading a root Foo, then removing a child outside of the session, causing one or more children to be orphaned. The exception occurs when I subsequently call SaveOrUpdate(root) in a second session. How can I reconcile the list of children on the detached, modified object with the object persisted in the database?
Sample code in question looks something like this:
Foo foo = new Foo();
Foo child1 = new Foo();
Foo child2 = new Foo();
foo.Children.Add(child1);
child1.Children.Add(child2);
// session #1
session.SaveOrUpdate(foo);
// so far, so good
// outside of any session
foo.Children.Clear();
// session #2
PutFoo(foo); // results in ObjectDeletedException
The object being persisted:
class Foo
{
    private IList<Foo> children = new List<Foo>();

    public virtual int Id { get; private set; }

    public virtual IList<Foo> Children
    {
        get { return children; }
        set { children = value; }
    }
}
The FluentNHibernate mapping:
class FooMap : ClassMap<Foo>
{
    public FooMap()
    {
        Id(x => x.Id);
        HasMany(x => x.Children).Cascade.AllDeleteOrphan();
    }
}
The method used to persist an object of type Foo:
void PutFoo(Foo foo)
{
    using (var session = factory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        session.SaveOrUpdate(foo);
        transaction.Commit();
    }
}
What I always do is create a bidirectional relationship.
This means the children hold a reference to their parent.
When removing a child from the collection, I also set its reference to the parent to NULL.
In the mapping, you then also have to mark the collection as the 'inverse' end of the relationship.
I also never expose the collection 'as is' outside the class.
So, I mostly do it like this:
public class Foo
{
    private ISet<Bar> _bars = new HashSet<Bar>();

    public ReadOnlyCollection<Bar> Bars
    {
        get { return new List<Bar>(_bars).AsReadOnly(); }
    }

    public void AddBar(Bar b)
    {
        b.Parent = this;
        _bars.Add(b);
    }

    public void RemoveBar(Bar b)
    {
        b.Parent = null;
        _bars.Remove(b);
    }
}

public class Bar
{
    public Foo Parent { get; set; }
}
Then, in the mapping, I mark the collection as the 'inverse' end. In my mapping (I still use XML files to specify the mapping), that means setting the inverse="true" attribute on the Bars collection of the Foo class.
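Since the question uses FluentNHibernate rather than XML mappings, the equivalent of inverse="true" would look roughly like this (a sketch; the "Parent_id" column name and the map class names are assumptions, and it presumes Foo and Bar expose mapped Id properties):

```csharp
public class FooMap : ClassMap<Foo>
{
    public FooMap()
    {
        Id(x => x.Id);
        HasMany(x => x.Bars)
            .KeyColumn("Parent_id")                   // assumed FK column
            .Access.CamelCaseField(Prefix.Underscore) // map the _bars field, not the read-only property
            .Inverse()                                // Bar's Parent reference owns the association
            .Cascade.AllDeleteOrphan();               // delete children removed from the set
    }
}

public class BarMap : ClassMap<Bar>
{
    public BarMap()
    {
        Id(x => x.Id);
        References(x => x.Parent, "Parent_id");       // the owning side of the relationship
    }
}
```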
Related
I have two objects that are similar but not quite the same:
public class ObjectA
{
public string PropertyA { get; set; }
}
public class ObjectB
{
public string PropertyA { get; set; }
public string PropertyB { get; set; }
}
In both objects PropertyA is used for database lookup via a stored procedure, however, the stored procedure used differs between the objects. Assuming the stored procedure returns successfully, the response is used to create the relevant object.
Currently I have two methods that perform the same logic of calling the relevant stored procedure and then creating the object:
public ObjectA QueryObjectA(string propertyA)
{
// shared code
var result = CallToStoredProcedureA(...);
var obj = new ObjectA
{
// use result here
};
return obj;
}
public ObjectB QueryObjectB(string propertyA)
{
// shared code
var result = CallToStoredProcedureB(...);
var obj = new ObjectB
{
// use result here
};
return obj;
}
This works, however, I am maintaining some shared code in multiple places and if I were to add another object in the future I would potentially have a third method (QueryObjectC).
Can this, or should this, be improved through the use of generics and if so what might that look like? Perhaps something along the lines of the following?
public T QueryObject<T>(string propertyA) where T : new()
{
// shared code
// get type of T
// switch statement that calls the relevant stored procedure and creates the object
// return object
}
So either way, you're probably going to want to create a common interface between A and B. We'll call it IBase.
Now, the simplest solution would be to try to move this procedure inside IBase and let polymorphism do the hard work for you:
public T QueryObject<T>() where T: IBase, new()
{
var obj = new T();
var result = obj.DoProcedure(...);
obj.AssignResults(result); // could be a part of DoProcedure
return obj;
}
with
interface IBase
{
ResultType DoProcedure(...);
}
and each of A and B would implement the correct call (possibly using a visitor pattern if needed).
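A sketch of what an implementation might look like (ResultType, the stored-procedure call, and the AssignResults member, which QueryObject above also relies on, are all assumptions standing in for your actual data-access code):

```csharp
// sketch; ResultType and the stored-procedure call are stand-ins
public class ResultType
{
    public string Value { get; set; }
}

public interface IBase
{
    ResultType DoProcedure(string propertyA);
    void AssignResults(ResultType result);
}

public class ObjectA : IBase
{
    public string PropertyA { get; set; }

    public ResultType DoProcedure(string propertyA)
    {
        // in real code this would invoke stored procedure A
        return new ResultType { Value = propertyA };
    }

    public void AssignResults(ResultType result)
    {
        PropertyA = result.Value;
    }
}
```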
If, however, this procedure really isn't the responsibility of A or B (from the looks of it it could be, but if it's not), then you can upgrade to the slightly heavier-duty solution of a factory of some kind.
You'll want a similar interface to before, but this one you can leave empty:
interface IBase
{ }
And an IProcedureExecutor interface (the name can be improved):
interface IProcedureExecutor
{
ResultType DoProcedure(...);
}
The implementors would each implement the function: ATypeProcExecutor : IProcedureExecutor and BTypeProcExecutor : IProcedureExecutor.
Then a Factory
class ProcedureExecutorFactory
{
    private static readonly Dictionary<Type, IProcedureExecutor> executors =
        new Dictionary<Type, IProcedureExecutor>
        {
            { typeof(A), new ATypeProcExecutor() },
            { typeof(B), new BTypeProcExecutor() }
        };

    public static IProcedureExecutor GetExecutor(Type t)
    {
        return executors[t];
    }
}
Then your QueryObject function would be
public T QueryObject<T>() where T: IBase, new()
{
var executor = ProcedureExecutorFactory.GetExecutor(typeof(T));
var result = executor.DoProcedure(...);
var obj = new T();
obj.AssignResults(result);// could be part of constructor
return obj;
}
The second is probably more SOLID, but might well be overkill for your problem.
You mentioned a switch statement in your question's pseudocode; that is a bad idea. A generic function that switches on the type of T loses all the flexibility benefits gained by using a generic in the first place.
Wanting to do something different based on the type of an object is really the heart of OOP, so making good use of polymorphism is the way to go here.
(on my phone so code untested)
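Putting the factory version together, a call site might look like this (assuming ObjectA and ObjectB implement IBase and are the types registered in the factory's dictionary):

```csharp
// sketch of usage; type names follow the factory's dictionary keys
var a = QueryObject<ObjectA>(); // resolves ATypeProcExecutor via typeof(ObjectA)
var b = QueryObject<ObjectB>(); // resolves BTypeProcExecutor via typeof(ObjectB)
```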
By using a common abstract base class ObjectBase having the property PropertyA you could write
public T QueryObject<T>(string propertyA) where T : ObjectBase, new()
{
var result = typeof(T).Name switch {
nameof(ObjectA) => CallToStoredProcedureA(...),
nameof(ObjectB) => CallToStoredProcedureB(...),
_ => throw new NotImplementedException()
};
var obj = new T {
PropertyA = result
};
return obj;
}
If the initialization is different for the different object types, then you could consider adding a static factory method to these types.
Objects with factory methods:
public class ObjectA : ObjectBase
{
public static ObjectA Create(string storedProcResult) =>
new() { PropertyA = storedProcResult };
}
public class ObjectB : ObjectBase
{
public string PropertyB { get; set; }
public static ObjectB Create(string storedProcResult) =>
new() {
PropertyA = storedProcResult.Substring(0, 3),
PropertyB = storedProcResult.Substring(3)
};
}
Generic method using the factory methods:
public T QueryObject<T>(string propertyA) where T : ObjectBase, new()
{
ObjectBase obj = typeof(T).Name switch {
nameof(ObjectA) => ObjectA.Create(CallToStoredProcedureA(...)),
nameof(ObjectB) => ObjectB.Create(CallToStoredProcedureB(...)),
_ => throw new NotImplementedException()
};
return (T)obj;
}
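With those factory methods in place, a call site might look like this (the "lookup-key" argument is a placeholder for whatever value drives the stored-procedure lookup):

```csharp
// "lookup-key" is a placeholder value for propertyA
ObjectA a = QueryObject<ObjectA>("lookup-key");
ObjectB b = QueryObject<ObjectB>("lookup-key");
// ObjectB.Create split the single stored-procedure result across
// PropertyA and PropertyB; ObjectA.Create assigned it to PropertyA only
```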
So, I have a list of children on my parent object, and I want to persist them to my SQL Server. When I run the application for the first time, all the children get their FK correctly, but when I run it again and no new parent is added, the new child (of an existing parent) doesn't get its parent FK, just NULL. How can I map the parent FK in my child mapping for those situations?
I've tried the Inverse() method, but since I need the parent key to be generated, all the children get NULL anyway. I need something like: if the parent is new, then the parent updates its children's FK, but when only the child is new, I would need the Inverse() behaviour. Is that possible?
Some more info:
Every time I call the ParentPersist method, it cascades as needed. I've added the AddChild() method to set the ParentId when a new child is added to the list; it's working (I debugged it), so the child is setting its ParentId correctly.
The objects are like the following:
public class Parent
{
public virtual int Id { get; set; }
...
public virtual IList<Child> Children{ get; set; }
public virtual void AddChild(Child ch)
{
ch.IdParent = this.Id;
Children.Add(ch);
}
}
public class Child
{
public virtual int Id { get; set; }
...
public virtual int IdParent {get;set;}
}
And my mapping:
public class ParentMapping : ClassMap<Parent>
{
public ParentMapping ()
{
Id(cso => cso.Id).GeneratedBy.Identity();
...
HasMany(cso => cso.Children).KeyColumn("IdParent").Cascade.SaveUpdate().Not.LazyLoad();
}
}
public class ChildMapping : ClassMap<Child>
{
public ChildMapping ()
{
Id(cso => cso.Id).GeneratedBy.Identity();
...
}
}
Your logic (e.g. the Add() method on Parent, the Inverse() mapping) was OK. You were almost there. There is only one BUT...
In general, the proper (if not the only correct) solution is to use objects to express the relation, not just ValueType/int values. That's why we call it ORM: Object-Relational Mapping.
Object in C# should look like this:
public class Parent
{
...
// correct mapping of the children
public virtual IList<Child> Children{ get; set; }
// this method uses the below updated Child version
public virtual void AddChild(Child ch)
{
// this is replaced
// ch.IdParent = this.Id;
// with this essential assignment
ch.Parent = this;
Children.Add(ch);
}
}
public class Child
{
...
// instead of this
// public virtual int IdParent {get;set;}
// we need the reference expressed as object
public virtual Parent Parent { get; set; }
}
So, now, once we have objects in place, we can adjust the mapping like this:
// parent
public ParentMapping ()
{
...
HasMany(cso => cso.Children)
.KeyColumn("IdParent")
.Inverse() // this is essential for optimized SQL Statements
.Cascade.SaveUpdate() // All delete orphan would be better
.Not.LazyLoad();
}
...
// Child
public ChildMapping ()
{
...
References(x => x.Parent, "IdParent"); // the owning side; required for Inverse() to work
}
With this business domain model and mapping (Inverse(), assigning both relation ends in the Add() method...), NHibernate will have enough information to always issue proper SQL statements (insert, update).
NOTE: One could ask why to map Parent Parent { get; set; } and not just int IdParent { get; set; }... In fact, if we have an existing Parent (with a NOT transient ID, i.e. > 0), there is no difference. The trick/problems appear on a new Parent insertion. Almost always, the assignment of the children comes before the Parent is persisted (flushed) and its ID is received from the DB (SQL Server identity), and that could/would cause child.IdParent == 0...
We should remember that, in general, ORM is about objects, i.e. the relation is represented by reference types.
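A sketch of how this plays out when the Parent itself is new (the session/factory plumbing is assumed from the question's persistence code):

```csharp
var parent = new Parent { Children = new List<Child>() };
parent.AddChild(new Child()); // wires child.Parent = parent (both ends)

using (var session = factory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // Cascade.SaveUpdate walks Children; because each child holds the
    // Parent as an object (not an int), NHibernate can insert the parent
    // first, obtain its identity, and insert the child with that FK.
    session.SaveOrUpdate(parent);
    tx.Commit();
}
```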
I have a readonly List so I can hide the Add method from other classes, like this:
class Foo
{
    private readonly List<Bar> _Bars = new List<Bar>();

    public Foo()
    {
        this.Bars = _Bars.AsReadOnly();
    }

    public ReadOnlyCollection<Bar> Bars
    {
        get;
        private set;
    }

    public void AddBar(Vector Dimensions)
    {
        _Bars.Add(new Bar(Dimensions));
    }
}
The thing is, now I want to order the _Bars field of an instance of Foo, like such:
public void OrderBarsByVolume()
{
_Bars.OrderByDescending(o => o.Volume); //Doesn't do anything
_Bars = _Bars.OrderByDescending(o => o.Volume).ToList(); //Error: A readonly field cannot be assigned to
}
Is it possible to use orderby and keep the add feature of the List hidden from other classes?
Use the List<T>.Sort method:
_Bars.Sort((x,y) => x.Volume.CompareTo(y.Volume));
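Sort mutates the list in place, so the ReadOnlyCollection wrapper you already handed out sees the new order too. For descending order (matching the question's OrderByDescending attempt), just flip the comparison:

```csharp
// descending by Volume: compare y to x instead of x to y
_Bars.Sort((x, y) => y.Volume.CompareTo(x.Volume));
```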
Not with your current implementation. However, if you adjust things slightly, then yes you can. The idea of "hiding" the underlying data means you don't have to hold it internally as read-only but rather expose it as read-only:
private List<Bar> _Bars = new List<Bar>();

public ReadOnlyCollection<Bar> Bars
{
    get { return _Bars.AsReadOnly(); }
}

public void OrderBy<TKey>(Func<Bar, TKey> keySelector)
{
    _Bars = _Bars.OrderByDescending(keySelector).ToList();
}
...
var foo = new Foo();
foo.OrderBy(x => x.Volume);
If you feel creating a new ReadOnlyCollection each time is too expensive then keep your code as it is but simply remove the readonly modifier
private List<Bar> _Bars = new List<Bar>();

public void OrderBy<TKey>(Func<Bar, TKey> keySelector)
{
    _Bars = _Bars.OrderByDescending(keySelector).ToList();
}
Add a public method that will do the ordering within the Foo object.
Even if James gave you some good tips, there are still some open issues.
So let's start with your implementation:
private readonly List<Bar> _Bars = new List<Bar>;
This won't make the list itself read-only. It is still possible to add or remove an item, or to clear the entire list. The readonly keyword only ensures that you can't replace the whole list with a completely different list.
So what you'd like is that within your class you have full access to the list (so Foo can add, remove, and sort items), but anybody who requests the list can only read it. The open question is what should happen if someone requests the list and the list is afterwards changed from within Foo: should the already-handed-out list reflect these changes or not? Mostly you do want this behaviour, but it really depends on what you'd like to achieve.
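The difference can be sketched like this: AsReadOnly() returns a live wrapper over the same list, while copying first gives a snapshot:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

class Program
{
    static void Main()
    {
        var inner = new List<int> { 1, 2 };

        // live view: the wrapper shares the underlying list
        ReadOnlyCollection<int> live = inner.AsReadOnly();

        // snapshot: copies the elements at this moment
        ReadOnlyCollection<int> snapshot = new List<int>(inner).AsReadOnly();

        inner.Add(3);

        Console.WriteLine(live.Count);     // 3 (reflects the change)
        Console.WriteLine(snapshot.Count); // 2 (frozen at copy time)
    }
}
```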
Here is my code example that should solve most of your problems:
internal class Foo
{
// The list which can be manipulated only be Foo itself.
private List<Bar> _Bars;
// The proxy that will be given out to the consumers.
private ReadOnlyCollection<Bar> _BarsReadOnly;
public Foo()
{
// Create the mutable list.
_Bars = new List<Bar>();
// This is a wrapper class that holds a
// reference to the mutable class, but
// throws an exception to all change methods.
_BarsReadOnly = _Bars.AsReadOnly();
}
public IReadOnlyList<Bar> Bars
{
// Simply give out the wrapper.
get { return _BarsReadOnly; }
}
public void AddBar(Vector dimensions)
{
// Manipulate the only intern available
// changeable list...
_Bars.Add(new Bar(dimensions));
}
public void SortBars()
{
// To change the order of the list itself
// call the Sort() method of list with
// a comparer that is able to sort the list
// as you like.
_Bars.Sort(BarComparer.Default);
// The method OrderBy() won't have any
// immediate effect.
var orderedList = _Bars.OrderBy(i => i.Volume);
// That's because it will just create an enumerable
// which will iterate over your given list in
// the desired order, but it won't change the
// list itself and so also not the outgiven wrappers!
}
}
To use the Sort() method of the list class you need a comparer, but that's quite easy to implement:
internal class BarComparer : IComparer<Bar>
{
public static BarComparer Default = new BarComparer();
public int Compare(Bar x, Bar y)
{
if (ReferenceEquals(x, y))
return 0;
if (ReferenceEquals(x, null))
return -1;
if (ReferenceEquals(y, null))
return 1;
return x.Volume.CompareTo(y.Volume);
}
}
I hope this gives you a little more enlightenment about how stuff in C# works.
Let the callers handle the sorted list:
public IEnumerable<Bar> OrderedBars<TKey>(Func<Bar, TKey> keySelector)
{
    return _Bars.OrderBy(keySelector);
}
If you really want to keep the sorted bars to yourself, you could create an immutable class where ordering the bars creates a new instance of Foo, which will then either replace the current one or be used by the caller; something like this:
public Foo OrderBarsByVolume()
{
    // note: this requires _Bars not to be declared readonly
    return new Foo { _Bars = this._Bars.OrderByDescending(o => o.Volume).ToList() };
}
If I have an object with nothing but private properties such as
public class Foo
{
private int Id { get; set; }
private string Bar { get; set; }
private string Baz { get; set; }
}
and store it in Raven, it will store those properties and everything works like magic. If I want to do some sort of read-only query off of the collection, how would I go about doing so using an index? (I'm actually open to any solution, even if it doesn't use indices.)
Obviously, something like this will not work because of the private access (and dynamic cannot be used in an expression tree):
public class Foo_LineItems : AbstractIndexCreationTask<Foo, FooLineItem>
{
public Foo_LineItems ()
{
Map = foos => foos.Where (x => x.Baz == null)
.Select (x => new { x.Id, x.Bar });
}
}
I'm sure I have overlooked something, but have been searching the web and cannot find anything that answers this specific question. The obvious answer is to segregate the reads and writes, using CQRS, and not actually persist the raw domain object. (This is just an experiment with Raven and CQS.)
We have an untyped API for doing this:
public class Foo_LineItems : AbstractIndexCreationTask
{
public override IndexDefinition CreateIndexDefinition()
{
return new IndexDefinition
{
Map = @"
from foo in docs.Foos
where foo.Baz == null
select new { foo.Id, foo.Bar }
"
};
}
}
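To have Raven actually build the index, the task class still has to be registered against the document store; with the standard client API that typically looks like this (documentStore being your already-initialized store):

```csharp
// scans the assembly for AbstractIndexCreationTask implementations
// and creates the corresponding indexes on the server
IndexCreation.CreateIndexes(typeof(Foo_LineItems).Assembly, documentStore);
```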
This is pretty closely related to another SO question.
Using the example below, could someone explain to me why adding a new List<Foo> where each of Foo's properties is explicitly set causes the ApplicationSettingsBase.Save() method to correctly store the data, whereas adding a new Foo to the list via a constructor (where the constructor sets the property values) does not? Thanks!
public class Foo
{
public Foo(string blah, string doh)
{
this.Blah = blah;
this.Doh = doh;
}
public Foo() { }
public string Blah { get; set; }
public string Doh { get; set; }
}
public sealed class MySettings : ApplicationSettingsBase
{
[UserScopedSetting]
public List<Foo> MyFoos
{
get { return (List<Foo>)this["MyFoos"]; }
set { this["MyFoos"] = value; }
}
}
// Here's the question...
private void button1_Click(object sender, EventArgs e)
{
MySettings mySettings = new MySettings();
// Adding new Foo's to the list using this block of code doesn't work.
List<Foo> theList = new List<Foo>()
{
new Foo("doesn't","work")
};
// But using this block of code DOES work.
List<Foo> theList = new List<Foo>()
{
new Foo() {Blah = "DOES", Doh = "work"}
};
// NOTE: I never ran both the above code blocks simultaneously. I commented
// one or the other out each time I ran the code so that `theList` was
// only created once.
mySettings.MyFoos = theList;
mySettings.Save();
}
This might be due to how you constructed your example. But using the given code, the "doesn't work" list is getting removed when you do the "does work" section. If you want both elements to be in theList at the end of the method, you can only have one new List<Foo>() call.
I stumbled upon the answer while trying to clarify my question just now. If I supply a default constructor to the Foo class:
public Foo() { }
(leaving everything else the same) then the values of the class are correctly stored in the user.config file when ApplicationSettingsBase.Save() is executed. Weird.
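The likely explanation (an assumption on my part, but consistent with the default settings provider's behaviour): ApplicationSettingsBase serializes non-primitive settings with XmlSerializer by default, and XmlSerializer requires a public parameterless constructor to round-trip a type; the settings machinery appears to swallow the serialization failure, so you see nothing stored rather than an exception. The same failure can be reproduced outside of settings:

```csharp
using System;
using System.Xml.Serialization;

public class NoDefaultCtor
{
    public string Blah { get; set; }
    public NoDefaultCtor(string blah) { Blah = blah; }
}

class Program
{
    static void Main()
    {
        // Throws InvalidOperationException: the type cannot be serialized
        // because it does not have a parameterless constructor.
        try { new XmlSerializer(typeof(NoDefaultCtor)); }
        catch (InvalidOperationException ex) { Console.WriteLine(ex.Message); }
    }
}
```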