Our project has a framework and our own code base, which implements entities of the framework.
The idea is also to have several indexes in the framework that return results for all classes derived from certain framework types, such as User.
Apparently, the only way Raven supports this (without creating the index at the highest level and manually adding maps) is to store all objects in the same collection by overwriting the CLR type. However, this means we lose the derived-type information and cannot query on it.
Some samples:
class A {
    public string Id { get; set; }
    public string Name { get; set; }
}
class B : A { }
class C : A { }
class D : C { }
Then I want to query something along the lines of:
store.Query<IndexOfA>().Where(a => a.Name == "foo").As<A>().ToList();
AND still be able to do this:
store.Query<IndexOfC>().As<C>().ToList()
My idea was to add a convention that saves both the derived and the base class to the metadata when storing documents in RavenDB, but I have no idea how to go about this and cannot find any documentation on the subject.
Any ideas?
You can create a multi-map index that uses AddMapForAll<Base>, which will generate a map for each derived class.
You could then use that index to do polymorphic queries.
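A minimal sketch of what that could look like for the hierarchy in the question, assuming the RavenDB multi-map index API (the index only covers Name here, and still has to be deployed with IndexCreation/Execute as usual):

// Multi-map index covering A and every class derived from it (B, C, D).
public class IndexOfA : AbstractMultiMapIndexCreationTask
{
    public IndexOfA()
    {
        // AddMapForAll<A> generates a map for A and for each derived class,
        // so a single index serves the whole hierarchy.
        AddMapForAll<A>(items => from item in items
                                 select new { item.Name });
    }
}

The polymorphic query then goes through a session, e.g. session.Query<A, IndexOfA>().Where(a => a.Name == "foo").ToList(); an IndexOfC built the same way over C would cover only C and D.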
I am using Entity Framework to query a database defined by a model. Inside this model I have several classes with a #region dynamic values:
[DataContract]
public class Job : AbstractEntity, IJob
{
[DataMember]
public virtual Guid Id { get; set; }
...
#region dynamic values
[DataMember]
public virtual string MetadataValue { get; set; }
[DataMember]
public virtual string ParametersValue { get; set; }
[DataMember]
public virtual string AttributesValue { get; set; }
#endregion
#region links
...
#endregion
}
AttributesValue, MetadataValue and ParametersValue are declared as string but are stored in the db as XML documents. I am aware that this is not consistent with the model and should be changed, but for various reasons it has been managed this way and I am not allowed to modify it.
I have created a Unit Test in order to better handle the problem, and here is the code:
[TestClass]
public class UnitTest1
{
private ModelContext mc;
[TestInitialize]
public void TestInit()
{
IModelContextFactory mfactory = ModelContextFactory.GetFactory();
mc = mfactory.CreateContextWithoutClientId();
}
[TestMethod]
public void TestMethod1()
{
DbSet<Job> jobs = mc.Job;
IQueryable<string> query = jobs
.Where(elem => elem.AttributesValue == "<coll><item><key>ids:ui:description</key><value>Session Test</value></item><item><key>ids:all:type</key><value>signature</value></item></coll>")
.Select(elem => elem.AttributesValue);
List<string> attrs = new List<string>(query);
foreach (string av in attrs)
{
Console.WriteLine(av ?? "null");
}
Assert.AreEqual(1, 1);
}
}
A quick explanation of TestInit and ModelContext:
ModelContext inherits from DbContext and is an abstract class implemented by SqlModelContext and OracleModelContext (both override OnModelCreating). Depending on the connection string, CreateContextWithoutClientId returns either a SqlModelContext or an OracleModelContext. In summary: a factory pattern.
Let's get down to brass tacks: TestMethod1.
The problem is in the Where method, and the error returned is, as expected:
SqlException: The data types nvarchar and xml are incompatible in the equal to operator.
(From now on I will only consider the AttributesValue property)
I thought of some possible solutions, which are:
Creating a new property inside the model (but not mapped to the db) and using it as a "proxy" instead of accessing AttributesValue directly. However, only mapped properties can be used in LINQ to Entities, so I discarded this.
Operating directly on the inner SQL query generated by the IQueryable and using a customized CAST for the Oracle and SQL Server databases. I'd rather avoid this for obvious reasons.
Is there a way to specify a custom property getter so that I can cast AttributesValue to string before it is accessed? Or maybe some configuration on the DbModelBuilder?
I'm using standard Entity Framework 6, Code-First approach.
There is no standard xml data type, nor a standard canonical function for converting string to xml or vice versa.
Fortunately, EF6 supports the so-called Entity SQL language, which has a useful CAST construct:
CAST (expression AS data_type)
The cast expression has similar semantics to the Transact-SQL CONVERT expression. The cast expression is used to convert a value of one type into a value of another type.
It can be utilized with the help of the EntityFramework.Functions package and Model defined functions.
Model defined functions allow you to associate an Entity SQL expression with a user defined function. The requirement is that the function argument must be an entity.
The good thing about Entity SQL operators is that they are database independent (similar to canonical functions): the final SQL is still generated by the database provider, so you don't need to write separate implementations for SQL Server and Oracle.
Install the EntityFramework.Functions package through NuGet and add the following class (note: all the code requires using EntityFramework.Functions;):
public static class JobFunctions
{
const string Namespace = "EFTest";
[ModelDefinedFunction(nameof(MetadataValueXml), Namespace, "'' + CAST(Job.MetadataValue AS String)")]
public static string MetadataValueXml(this Job job) => job.MetadataValue;
[ModelDefinedFunction(nameof(ParametersValueXml), Namespace, "'' + CAST(Job.ParametersValue AS String)")]
public static string ParametersValueXml(this Job job) => job.ParametersValue;
[ModelDefinedFunction(nameof(AttributesValueXml), Namespace, "'' + CAST(Job.AttributesValue AS String)")]
public static string AttributesValueXml(this Job job) => job.AttributesValue;
}
Basically we add a simple extension method for each xml property. The method bodies don't do anything useful - the whole point of these methods is not to be called directly, but to be translated to SQL when used inside a LINQ to Entities query. The required mapping is provided through the ModelDefinedFunctionAttribute and applied by the custom FunctionConvention implemented by the package. The Namespace constant must be equal to typeof(Job).Namespace. Unfortunately, since attributes can only use constants, we can't avoid that hardcoded string, nor the entity class / property names inside the Entity SQL string.
One thing that needs more explanation is the usage of '' + CAST. I wish we could simply use CAST, but my tests show that SQL Server is "too smart" (or buggy?) and removes the CAST from the expression when it is used inside WHERE. The trick with the empty string concatenation prevents that behavior.
Then you need to add these functions to the entity model by adding the following line to your DbContext's OnModelCreating override:
modelBuilder.AddFunctions(typeof(JobFunctions));
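For context, a minimal sketch of that override (whether it lives on the abstract ModelContext from the question or in each concrete context is up to you):

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    // Registers the model defined functions declared on JobFunctions.
    modelBuilder.AddFunctions(typeof(JobFunctions));
}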
Now you can use them inside your LINQ to Entities query:
IQueryable<string> query = jobs
.Where(elem => elem.AttributesValueXml() == "<coll><item><key>ids:ui:description</key><value>Session Test</value></item><item><key>ids:all:type</key><value>signature</value></item></coll>")
.Select(elem => elem.AttributesValue);
which translates to something like this in SQL Server:
SELECT
[Extent1].[AttributesValue] AS [AttributesValue]
FROM [dbo].[Jobs] AS [Extent1]
WHERE N'<coll><item><key>ids:ui:description</key><value>Session Test</value></item><item><key>ids:all:type</key><value>signature</value></item></coll>'
= ('' + CAST( [Extent1].[AttributesValue] AS nvarchar(max)))
and in Oracle:
SELECT
"Extent1"."AttributesValue" AS "AttributesValue"
FROM "ORATST"."Jobs" "Extent1"
WHERE ('<coll><item><key>ids:ui:description</key><value>Session Test</value></item><item><key>ids:all:type</key><value>signature</value></item></coll>'
= ((('')||(TO_NCLOB("Extent1"."AttributesValue")))))
Using MongoDB as my data store means I get the ObjectId type as the primary key by default. It can be changed to a Guid by using the [BsonId] attribute, which is also defined in the MongoDB C# driver library. I would like to keep my entities independent from the data layer.
Can I just use the name Id for the property to identify the primary key? What else can I try?
You can use BsonClassMap instead of using attributes to keep your classes "clean".
// 'clean' entity with no mongo attributes
public class MyClass
{
public Guid Id { get; set; }
}
// mappings in data layer
BsonClassMap.RegisterClassMap<MyClass>(cm =>
{
cm.AutoMap();
cm.MapIdMember(c => c.Id).SetIdGenerator(CombGuidGenerator.Instance);
});
OPTION 1: Stick with BsonId and use the Facade Pattern
The [BsonId] attribute is what you'd use to indicate which property should be mapped to _id. There isn't a way around that (short of ignoring _id entirely in your CRUD operations, which seems like a bad idea).
So, if you want to separate your "entity" object from your "data layer", just use a POCO class.
-- Use a POCO class as a substitute for a record. That class is only for data storage: a quick way to get data in/out of mongo, and a great alternative to working with bson documents.
-- Use a facade on top of that POCO class for your entity layer. I don't find it useful to re-invent the wheel, so I typically ask our devs to have the entity interface inherit the data-layer (POCO) interface, but you can do it however you'd like.
Breaking up a sample MyObject class
IMyObjectRecord (declared at the dal and contains only properties and mongo-specific attributes)
IMyObject:IMyObjectRecord (declared at the entity level and may include added properties and methods)
MyObjectRecord:IMyObjectRecord (declared inside the dal, contains mongo-specific attributes. Could be declared internal if you wanted to be really strict about separation).
MyObject:IMyObject (could be, for example, a facade on top of the IMyObjectRecord class you pull from the dal).
Now - you get all the benefits of the facade, you have a hard-coded link between the properties, and you get to keep the Bson attributes contained in your DAL.
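For what it's worth, a minimal sketch of that layering (all names are illustrative, and the Name/DisplayName members are made up):

using MongoDB.Bson.Serialization.Attributes;

// Data layer (DAL):
public interface IMyObjectRecord
{
    string Id { get; set; }
    string Name { get; set; }
}

// Could be internal if you want to be strict about separation.
public class MyObjectRecord : IMyObjectRecord
{
    [BsonId] // the mongo-specific attribute stays in the DAL
    public string Id { get; set; }
    public string Name { get; set; }
}

// Entity layer:
public interface IMyObject : IMyObjectRecord
{
    string DisplayName { get; } // added behaviour lives at the entity level
}

// Facade over the record pulled from the DAL.
public class MyObject : IMyObject
{
    private readonly IMyObjectRecord _record;
    public MyObject(IMyObjectRecord record) { _record = record; }

    public string Id { get { return _record.Id; } set { _record.Id = value; } }
    public string Name { get { return _record.Name; } set { _record.Name = value; } }
    public string DisplayName { get { return "MyObject " + Name; } }
}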
OK, fine. But I really really really HATE that answer.
Yeah. I can accept that. OK, so how about a Convention Pack? If you ABSOLUTELY PROMISE that you'll call your Id's "Id" and you SWEAR that you'll type them as strings (or -- use some other convention that is easy to identify), then we could just use a convention pack like the one I stole from here
using System;
using MongoDB.Bson.Serialization;
using MongoDB.Bson.Serialization.Conventions;
using MongoDB.Bson.Serialization.IdGenerators;
using MongoDB.Driver;

namespace ConsoleApp {
class Program {
private class Foo {
// Look Ma! No attributes!
public string Id { get; set; }
public string OtherProperty { get; set; }
}
static void Main(string[] args) {
//you would typically do this in the singleton routine you use
//to create your dbClient, so you only do it the one time.
var pack = new ConventionPack();
pack.Add(new StringObjectIdConvention());
ConventionRegistry.Register("MyConventions", pack, _ => true);
// Note that we registered that before creating our client...
var client = new MongoClient();
//now, use that client to create collections
var testDb = client.GetDatabase("test");
var fooCol = testDb.GetCollection<Foo>("foo");
fooCol.InsertOne(new Foo() { OtherProperty = "Testing", Id="TEST" });
var foundFoo = fooCol.Find(x => x.OtherProperty == "Testing").ToList()[0];
Console.WriteLine("foundFooId: " + foundFoo.Id);
}
//obviously, this belongs in that singleton namespace where
//you're getting your db client.
private class StringObjectIdConvention : ConventionBase, IPostProcessingConvention {
public void PostProcess(BsonClassMap classMap) {
var idMap = classMap.IdMemberMap;
if (idMap != null && idMap.MemberName == "Id" && idMap.MemberType == typeof(string)) {
idMap.SetIdGenerator(new StringObjectIdGenerator());
}
}
}
}
}
What's a Convention Pack?
It's a little set of mongo "rules" that get applied during serialization/deserialization. You register it once (when you set up your engine). In this case, the sample pack is telling mongo "if you see a property called 'Id', then save it as a string to _id, please."
These can get really complex and fun. I'd dig into convention packs if you really really really hate the other approach. It's a good way to force all your mongo "attribute driven" logic into one self-contained location.
I have stumbled on the same problem myself, and I didn't want to have mongo attributes inside my classes.
I have created a small wrapper example to show how I save and find elements without having an Id property on the data classes of my business logic.
The wrapper class:
public static class Extensions
{
public static T Unwrap<T>(this MongoObject<T> t)
{
return t.Element;
}
}
public class MongoObject<T>
{
[BsonId]
private ObjectId _objectId;
public T Element { get; }
public MongoObject(T element)
{
Element = element;
_objectId = ObjectId.GenerateNewId(); // new ObjectId() would produce an empty (all-zero) id
}
}
I have also added an extension method to easily unwrap.
Saving an element is simple
public void Save<T>(T t)
{
_collection.InsertOne(new MongoObject<T>(t));
}
To find an element we can do a linq-like query:
Say we have a data class:
public class Person
{
public string Name { get; set; }
}
then we can find such an element by
public Person FindPersonByName(string name)
{
return _collection.AsQueryable().FirstOrDefault(
personObject => personObject.Element.Name == name).Unwrap();
}
We could also generalize by making MongoObject<T> implement IQueryable<T>, which would make the wrapper even more convenient to use.
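One hedged way to approximate that idea without actually implementing IQueryable<T> on the wrapper (the method name is made up, and you may need using MongoDB.Driver.Linq for AsQueryable):

// Expose the wrapped collection as an IQueryable over the unwrapped elements,
// so callers can write queries directly against T.
public IQueryable<T> QueryElements<T>(IMongoCollection<MongoObject<T>> collection)
{
    return collection.AsQueryable().Select(o => o.Element);
}

// e.g. QueryElements(_collection).FirstOrDefault(p => p.Name == name);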
If I understand correctly, you want to put your entity in another layer without attributes.
I think you can try this:
public object Id { get; set; }
After that you can store the Id coming from MongoDB without an attribute.
I'm trying to create a data structure with Entity Framework to store property values of my objects. I want users to be able to add properties to a class at runtime. The properties can be of different data types (string/int/float etc.).
So I thought I needed some tables/classes as defined in the image below.
So my Object class contains a list of properties whose types are defined in the PropertyDefinition class.
One hard thing is that the values are stored in the table matching the property's data type. (So a conditional foreign key?)
Please give me some pointers on how to implement this using the Fluent API, or other ideas on this subject. (I guess I won't be the first ;)
Werner
The EF entity model cannot be changed at runtime (or at least is not designed for it). You could build a property-name/property-value infrastructure with EF, but I don't think it is the right choice (you lose most of the functionality).
The best choice could be a NoSQL db or ADO.NET, or, if only some objects can be personalized and the others are fixed, you could store the personalizable objects as XML/JSON in a text field.
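As a hedged illustration of the last option (the entity and its members are made up; it uses Json.NET for the serialization):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;
using Newtonsoft.Json;

public class PersonalizableObject
{
    public int Id { get; set; }

    // Mapped text column holding the dynamic properties as JSON.
    public string DynamicPropertiesJson { get; set; }

    // Convenience view over the JSON; EF ignores this property.
    [NotMapped]
    public Dictionary<string, string> DynamicProperties
    {
        get
        {
            return string.IsNullOrEmpty(DynamicPropertiesJson)
                ? new Dictionary<string, string>()
                : JsonConvert.DeserializeObject<Dictionary<string, string>>(DynamicPropertiesJson);
        }
        set { DynamicPropertiesJson = JsonConvert.SerializeObject(value); }
    }
}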
I found this link
This helped me solve my "Table Per Type" question. I now have:
public abstract class PropertyBase
{
public int PropertyID { get; set; }
public string Name { get; set; }
}
public class TextProperty : PropertyBase
{
public string Value { get; set; }
}
public class IntProperty : PropertyBase
{
public int Value { get; set; }
}
In My Database Context I added:
modelBuilder.Entity<PropertyBase>()
.HasKey(p => p.PropertyID)
.ToTable("Properties");
modelBuilder.Entity<IntProperty>()
.ToTable("IntProperties");
modelBuilder.Entity<TextProperty>()
.ToTable("TextProperties");
The different types of properties (subclasses) are now stored in separate tables, while the abstract base class holds all the shared info. This worked fine for me.
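For illustration, a hedged usage sketch; the Item class, its members and the item variable are hypothetical:

using System.Collections.Generic;
using System.Linq;

public class Item
{
    public int ItemID { get; set; }

    // EF maps each concrete property type to its own table (TPT), so this
    // collection can mix TextProperty and IntProperty without any
    // conditional foreign key.
    public virtual ICollection<PropertyBase> Properties { get; set; }
}

// Usage, given an Item instance named item:
//   item.Properties.Add(new TextProperty { Name = "Color", Value = "Red" });
//   item.Properties.Add(new IntProperty { Name = "Weight", Value = 42 });
//   var weight = item.Properties.OfType<IntProperty>()
//                    .First(p => p.Name == "Weight").Value;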
In a DDD approach, I have a Domain Model (DM) with rich behaviour. Suppose I have a root entity called Order and a related LineOrder. The exposed collection of LineOrder needs to be an IReadOnlyCollection, since no one may alter the collection arbitrarily. In code:
public class Order : AggregateRoot {
// fields
private List<LineOrder> lineOrder;
// ctors
private Order() {
this.lineOrder = new List<LineOrder>();
// other initializations
}
// properties
public IReadOnlyCollection<LineOrder> LineOrder {
get
{
return lineOrder.AsReadOnly();
}
}
// behaviours
}
So far, so good. But when I want to persist this domain I hit some technology restrictions imposed by Entity Framework (a key is needed even for a value object, a parameterless constructor is required, and so on) that are not a perfect match for a DDD approach.
Another limitation that I have is:
public class OrderConfiguration : EntityTypeConfiguration<Order>
{
public OrderConfiguration()
{
ToTable("Order");
HasMany<LineOrder>(m => m.LineOrder); // Exception: Cannot convert from IReadOnlyCollection to ICollection
}
}
I cannot cast IReadOnlyCollection to ICollection (incidentally, if LineOrder were an ICollection, everything would work!).
For the reasons expressed above: would it be useful in this case to create a Persistence Model (with the accompanying cons: mapping DM/PM and vice versa)?
Is there an alternative? And, above all, is there an alternative that fits a DDD approach well?
Have you tried declaring the LineOrder collection as protected? This way EF has access but consumers do not.
// properties
protected ICollection<LineOrder> LineOrder { get; set; }
You can then expose this collection in a read-only manner to the end user with:
public IReadOnlyCollection<LineOrder> ReadOnlyLineOrder
{
get
{
return LineOrder.ToList().AsReadOnly();
}
}
I'm having difficulty using AutoMapper to convert objects coming from NHibernate queries into my DTOs, in the following configuration.
Let's say I have 4 classes:
class A
{
//some fields of built-in type
}
abstract class B // some classes derive from this one, but that is not important here
{
//some fields of built-in type
public A refA { get; set; }
}
class C
{
//some fields of built-in type
public B refB { get; set; }
}
class D
{
//some fields of built-in type
public C refC { get; set; }
}
I use AutoMapper to convert them to my DTOs; let's assume for simplicity that the DTOs are exact copies of these classes.
I want to send this over the wire, so before serializing I ask AutoMapper to convert it into the DTO corresponding to the D class.
If I create these objects and configure the fields myself, then when I call
Mapper.Map<T1,T2>(T1 source)
it works, so my AutoMapper configuration is correct. It also works with
Mapper.Map<IList<T1>, List<T2>>
Very well.
Now I create these objects, put them in a database, and run a query against my SQL DB with NHibernate to retrieve an IList<D>.
If I now try to convert the result to DTOs, it doesn't work anymore.
I traced the code in AutoMapper: it correctly maps all the built-in type fields in class D, but when it comes to refC it crashes somewhere.
I know about lazy loading and the fact that NHibernate just gives me a proxy of my reference to class C, but I don't see how to solve this.
Just so you know, NHibernateUtil.IsInitialized(refC) is true.
Many Thanks
You will have to unproxy your entities before passing them to AutoMapper. This is basically the same issue you would run into with JSON serialization.
You can use
Session.GetSessionImplementation().PersistenceContext.Unproxy(objectToUnproxy);
to unproxy something.
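A hedged sketch of how that could look here (it builds on the D/C classes from the question; listOfD, session and the DDto type are assumptions):

// Assumes an open ISession named session and the NHibernate query result listOfD (IList<D>).
var persistenceContext = session.GetSessionImplementation().PersistenceContext;

foreach (var d in listOfD)
{
    // Swap the NHibernate proxy for the underlying entity so AutoMapper
    // sees the real C instead of the runtime proxy type.
    d.refC = (C)persistenceContext.Unproxy(d.refC);
}

var dtos = Mapper.Map<IList<D>, List<DDto>>(listOfD);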
Or you disable lazy loading.
Or you do not use automapper and instead use standard transformations... e.g.
.Query().Select(p => new SomeDto(){ PropA = p.PropA, ...});
You can also use another standard way:
resultSet = session.CreateCriteria(typeof(DataObject))
.Add(query criteria, etc.)
.SetResultTransformer(Transformers.AliasToBean<DTOObject>())
.List<DTOObject>();
Basically you don't have to map every property by hand; it is enough that the property names match between your DTO and your data object.