Apparently, the same column's value type differs across environments for the same database entity (table), and they refuse to update it to a common type - don't ask why!
I am using Entity Framework (version 6.1.3) alongside a Unit of Work for data access. And, as you can guess, I am getting errors because the DEV and QA database definitions do not match for the same column.
THE GOOD NEWS:
We do not save into these particular tables - we only query those particular tables.
SAMPLE MODEL:
There are obviously more columns than this.
public partial class Transactions
{
[Key]
public int TransactionId { get; set; }
public float Amount { get; set; } //<-- This type differs between database environments
}
MY QUESTION:
Is there a way to dynamically bind the value for a column in Entity Framework?
Or, can I treat it as a dynamic under the hood... and transform it to an expected type that is constant in my model?
OPTIMALLY - AND TO BE CLEAR:
I would like to define the property concretely and have Entity Framework "convert" from the unknown type into the concrete type - but under the hood.
Any help is appreciated.
If the types of the columns are compatible (i.e. they are all numbers), you can use a common type on the class and then disable model checking (and migrations). This solution may work on some DBMSs and not on others (it depends on the provider).
You can write a view that casts the column and map the view in your model (instead of the table).
You can write a raw SQL query with EF (official docs: https://msdn.microsoft.com/en-us/data/jj592907.aspx). This is similar to a Dapper solution, with the same advantages and drawbacks; in your case, lazy loading won't work.
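A minimal sketch of the third option, assuming the Transactions class above and a DbContext named MyContext (the context name, the cast target and the SQL are illustrative):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class MyContext : DbContext
{
    // Raw SQL sidesteps the mapping: the column is cast to a common type
    // in SQL, so the CLR property type no longer has to match the physical
    // column type in each environment. Results are not change-tracked.
    public List<Transactions> GetTransactions()
    {
        return Database
            .SqlQuery<Transactions>(
                "SELECT TransactionId, CAST(Amount AS real) AS Amount FROM dbo.Transactions")
            .ToList();
    }
}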
We are reviewing two different methods for a generic repository pattern.
Currently, we want to map primary keys to Ids. The purpose is to map to the generic repository interface, which utilizes Id. Two solutions are provided below.
What are the performance implications of .FindPrimaryKey().Properties? Does it cause a schema lock on the database table while trying to find the primary key? Does it cause any application slowness?
How does it compare in performance to the partial class approach in Solution 2?
Which option is better performance-wise?
Note: Architects demand the use of the repository pattern at our workplace, so we are implementing it. We know there is debate surrounding this issue, but it's not our call.
Scaffolded Model Example:
namespace Datatest
{
public partial class Property
{
public int PropertyId { get; set; }
public int DocumentId { get; set; }
public string Address { get; set; }
}
}
Sample Generic Base Repository for all tables:
public T Get(int id)
{
return Table.Find(id);
}
public async Task<T> GetAsync(int id)
{
return await Table.FindAsync(id);
}
public T Single(Expression<Func<T, bool>> predicate)
{
return All.Single(predicate);
}
public async Task<T> SingleAsync(Expression<Func<T, bool>> predicate)
{
return await All.SingleAsync(predicate);
}
public T FirstOrDefault(int id)
{
return All.FirstOrDefault(CreateEqualityExpressionForId(id));
}
Solution 1: FindPrimaryKey()
Generic Repository in C# Using Entity Framework
use EF FindPrimaryKey()
var idName = _context.Model.FindEntityType(typeof(TEntity))
.FindPrimaryKey().Properties.Single().Name;
Solution 2: Partial classes Mapping
Net Core: Create Generic Repository Interface Id Mapping for All Tables Auto Code Generation
public partial class Property : IEntity
{
[NotMapped]
public int Id { get => PropertyId; set => PropertyId = value; }
}
Regarding the first approach (using EF Core metadata services):
First, EF Core is an ORM (Object Relational Mapper), and the most important word here is Mapper.
Second, it uses a so-called code-based model, which means all the mappings are provided by code, not by the actual database (even when the model is created by reverse engineering an existing database).
In simple words, at runtime EF Core creates an in-memory data structure containing the information (metadata) about classes and properties, and their mappings to database tables, columns and relationships. All that information is based purely on the code model - the entity classes, conventions, data annotations and fluent configuration.
All EF Core runtime behaviors are based on that metadata model. EF Core uses it internally when building queries, mapping the query results to objects, linking navigation properties, generating create/update/delete commands and their order of execution, updating temporary FK property values after getting the real autogenerated principal key values, etc.
Hence the metadata model and its discovery services (methods) use optimized data structures and are (have to be) quite efficient. And again, no database operations are involved.
So the first approach is quite efficient. The performance impact of obtaining the PK property name via the metadata service is negligible compared to the actual query building, execution and materialization.
Also, the performance of the first approach is similar to the EF Core Find method, which you are already using in another method. Note that when calling Find you just pass the PK value(s), not the properties, so the method implementation has to somehow know how to build the Where expression, right? What it does internally is very similar to the suggested snippet.
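For illustration, here is a minimal sketch (not the actual Find internals) of how the PK name obtained from the metadata model can drive a generic lookup; the class and method names are illustrative:

using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class KeyLookupRepository<TEntity> where TEntity : class
{
    private readonly DbContext _context;

    public KeyLookupRepository(DbContext context) => _context = context;

    public async Task<TEntity> GetByKeyAsync(int id)
    {
        // Metadata lookup only - an in-memory model walk, no database round trip.
        var keyName = _context.Model.FindEntityType(typeof(TEntity))
            .FindPrimaryKey().Properties.Single().Name;

        // EF.Property references the key by name inside the expression,
        // so the predicate still translates to SQL (WHERE [KeyColumn] = @id).
        return await _context.Set<TEntity>()
            .SingleOrDefaultAsync(e => EF.Property<int>(e, keyName) == id);
    }
}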
Regarding the second approach:
It's simply not comparable, because it doesn't work. It's possible to use a base class/interface, but only if the actual property is mapped - e.g. all classes have an Id property, and it's mapped to a different column name in the database tables using the [Column] data annotation or the HasColumnName fluent API.
In your example, the Id property is [NotMapped] (ignored), which means EF Core cannot map it to a table column. The fact that you are mapping it to another property via code (a property getter/setter) doesn't matter. EF Core is not a (de)compiler; it can't see your code, hence it cannot translate a LINQ query using such properties to SQL.
In EF Core 2.x this leads either to client evaluation (very inefficient - reading the whole table and applying the filter in memory) or to an exception, if client evaluation is configured to throw. In EF Core 3.0+ it will always be an exception.
So unless you remove properties like PropertyId and map the Id property directly (which would be hard with "database first" models), the second "approach" should be avoided. And even if you could map the actual Id property, all you'd save would be a few milliseconds. Again, when using Find you don't worry about performance, so why worry about methods that use the same (or a similar) approach?
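For completeness, a sketch of the only variant of the second approach that would actually translate to SQL - renaming the scaffolded key property to Id and keeping the physical column name via [Column]. This replaces the scaffolded class rather than extending it, and the PropertyId column name is an assumption:

using System.ComponentModel.DataAnnotations.Schema;

public interface IEntity
{
    int Id { get; set; }
}

// Replaces the scaffolded definition: the key property is named Id on the
// class but still maps to the PropertyId column, so EF Core can translate
// queries on Id straight to SQL.
public class Property : IEntity
{
    [Column("PropertyId")]
    public int Id { get; set; }

    public int DocumentId { get; set; }
    public string Address { get; set; }
}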
I have a three-tier app with a class library as the Infrastructure Layer, which contains an Entity Framework data model (database first).
Entity Framework generates the entity classes under the Model.tt file. These classes are populated with data from the database.
In the past I would map the classes created by Entity Framework (in the data project) to classes in the Domain project e.g. Infrastructure.dbApplication was mapped to Domain.Application.
My reading is telling me that I should be using the classes contained under the .tt file as the domain classes, i.e. add domain methods to the classes generated by Entity Framework. However, this would mean that the domain classes would be contained in the Infrastructure project, wouldn't it? Is it possible to relocate the classes generated by Entity Framework to the Domain project? Am I missing something fundamental here?
I think in the true sense it is a Data Model - not a Domain Model. Although people talk about treating the Entity Framework model as a domain model, I don't see how you can easily retrofit value objects such as, say, an amount, which in the true domain sense would be represented like this:
public class CustomerTransaction
{
public int Id { get; set; }
public string TransactionNumber { get; set; }
public Amount Amount { get; set; }
}
public class Amount
{
public decimal Value { get; }
public Currency Currency { get; }
}
As opposed to the (domain-wise less correct) data model approach:
public class CustomerTransaction
{
public int Id { get; set; }
public string TransactionNumber { get; set; }
public int CurrencyType { get; set; }
public decimal Amount { get; set; }
}
Yes, the example is anaemic, but I am only interested in the properties here for clarity's sake - not behaviour. For starters, you would need to consider the visibility of the properties and whether you need a default constructor on the "business/data object".
So in the domain sense, Amount is a value object on a CustomerTransaction - which I am assuming is an entity in the example.
So how would this translate to database mappings via Entity Framework? There might be a way to hold the above in a single CustomerTransaction table as the flat structure in the data model, but my way would be to add a repository around it and map out to the data structures.
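A minimal sketch of that mapping, assuming the flat data-model class lives in the Infrastructure project (here renamed CustomerTransactionRecord to avoid a name clash), that Amount gains a constructor, and that Currency is an enum - all of these names are illustrative:

public class CustomerTransactionRepository
{
    private readonly AppDbContext _db; // hypothetical EF context exposing the flat records

    public CustomerTransactionRepository(AppDbContext db)
    {
        _db = db;
    }

    public CustomerTransaction GetById(int id)
    {
        // Load the flat data-model row...
        var row = _db.CustomerTransactionRecords.Find(id);

        // ...and map it to the domain entity with its Amount value object.
        return new CustomerTransaction
        {
            Id = row.Id,
            TransactionNumber = row.TransactionNumber,
            Amount = new Amount(row.Amount, (Currency)row.CurrencyType)
        };
    }
}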
Udi Dahan has some good information on DDD and ORM in the true sense. I thought somewhere he talked about DDD and ORM having the Data Model instance as a private field in the domain object but I might be wrong.
Also, that data model suffers from Primitive Obsession (I think Fowler coined the term - it is at least covered in his Refactoring book). Jimmy Bogard talks about that here.
Check out Udi Dahan's stuff.
You should move your model to a different project; that is good practice. I don't quite get what you meant by "moving it to the Domain project". Normally the Entity Framework generated classes are used as the domain model; there is no need to create a "different" domain model from them. That model should be used only near the database operations, whereas the web (or windows) application should use only DTOs (Data Transfer Objects).
I don't know if you use it or not, but this is a nice tool for recreating the model from the database:
https://marketplace.visualstudio.com/items?itemName=SimonHughes.EntityFrameworkReversePOCOGenerator
This allows you to store the model in classes (instead of an EDMX). Some refer to this as "code first", but that is a misunderstanding: one can use this tool to create the model and still be "database first". It is done simply to avoid using the EDMX as the model definition.
You can relocate the entity classes by creating a new item in your Domain project: the EF 6.x DbContext Generator (I'm not sure of the exact name, and you might have to install a plugin to get this item in the list; it also exists for EF 5.x).
Once you have created this new item, you have to edit it to set the path of your EDMX at the very beginning of the file. In my project, for example, it is:
const string inputFile = @"..\..\DAL.Impl\GlobalSales\Mapping\GlobalSalesContext.edmx";
You will also need to edit the DbContext.tt file to add the right using directives on top of the generated classes. After each change you make to the EDMX, you will also have to right-click the generator and choose "Run Custom Tool" to regenerate the classes.
That being said, is it good practice? As you can see, that's what I have done in my project. As long as you do not have EF-specific annotations or similar in the generated entity classes, I would say it is acceptable.
If you need to change your ORM, you can just keep the generated classes and remove all the EF stuff (.tt files, etc.), and the rest of your application will work the same. But that's opinion-based.
I am trying to use the Entity Framework code first method to connect to a PostgreSQL database. When I use the Entity Data Model wizard in Visual Studio to generate C# classes from the database, it generates classes for each table successfully, but the views in the database are not generated.
Can someone tell me where I went wrong? I am using Entity Framework 6.1.3 with Npgsql 2.2.5. The PostgreSQL database is version 9.3.6, installed on an Ubuntu server.
Thanks
I know this question is a little bit old now, but I'll chime in for anyone else who may be looking for solutions. My answer may not be exactly what the question was looking for; however, it has sufficed as a workaround for me.
The problem with views is that Entity Framework has a hard time determining the primary key column for them. In SQL Server, you can use the ISNULL() function to trick EF into thinking that a column is a key column, but the equivalent coalesce() function in Postgres isn't good enough for EF. I also tried generating an auto-incrementing row id column, joining to other tables with primary keys, etc.; no luck with any of these.
However, something that has just about emulated the functionality I needed - querying my views into my view objects - is to extend the context class with members that call Database.SqlQuery and return the result as an IQueryable.
For example:
Suppose you have a view in your database, "foo", with columns id, bar and baz. You can write your own POCO to hold the view data, like so:
public class foo
{
public int id { get; set; }
public string bar { get; set; }
public string baz { get; set; }
}
and then extend your context class with a partial class definition like this
public partial class FooContext : DbContext
{
public IQueryable<foo> foo =>
this.Database.SqlQuery<foo>( "select * from foo" ).AsQueryable();
}
And then you can query it from your context just the same as any other table
context.foo.Where( x => x.id > 100 ).ToList(); // etc., etc.
You won't be able to do inserts or use any of the extra capabilities that usually come with the standard DbSet, but views are typically used for read-only queries anyway (unless you're using some special insert triggers)...
But this gives you a base call that will query the entire view, and it doesn't hit the database immediately because it's left as a queryable, so you're free to call other LINQ extensions on it, such as Where, to filter it down to the results you want.
I migrated from SQL Server to PostgreSQL using the Npgsql lib, and this fix allowed my views to work without having to make any changes to my program's codebase, just as if nothing had changed at all, despite the fact that the EDMX would not generate my view objects due to the lack of a (discernible) primary key.
Hope this helps!
I have a code first EF setup, and I want to use native SQL for the more complex select statements.
When I try to execute:
using (VaultsDbContext db = new VaultsDbContext())
{
var contracts = db.Contracts.SqlQuery("select * from Contracts").ToList<Contract>();
}
I got:
Cannot create a value for property 'MetaProps' of type
'DskVault.Models.DbModels.MetaProps'. Only properties of primitive or
enumeration types are supported.
MetaProps is a class that holds deleteflag, creator, etc., and it's a property on all my classes. It's not mapped to a different table; every table has deleteflag, creator, etc.
public class Contract
{
public long Id { get; set; }
...
public MetaProps MetaProps { get; set; }
}
Is there a way to map from native SQL to the class if the class contains a complex type, or does EF not support that? Also, what if the complex type is an entity mapped to another table (a join)?
Edit:
Version: Entity Framework 6
I know from experience that not all the fields in your table have to be contained in your model. This is a good thing when it comes to installing updates into production.
Have you tried reverse engineering your tables in a SEPARATE temporary project using the Entity Framework Power Tools? This is a NuGet package that I have found extremely useful in code first programming. Reverse engineering will overwrite existing files, so make sure not to do this on your live code.
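As for the original error: SqlQuery only materializes primitive and enumeration properties, so one possible workaround (a sketch only - the flat ContractRow DTO, its column names and the MetaProps member names are all assumptions) is to select into a flat type and rebuild the complex type by hand:

using System.Collections.Generic;
using System.Linq;

// Flat shape matching the raw SQL result (illustrative names).
public class ContractRow
{
    public long Id { get; set; }
    public bool DeleteFlag { get; set; }
    public string Creator { get; set; }
}

public static class ContractQueries
{
    public static List<Contract> GetContracts(VaultsDbContext db)
    {
        // Database.SqlQuery<T> is happy with any POCO that has only primitive properties.
        var rows = db.Database
            .SqlQuery<ContractRow>("select Id, DeleteFlag, Creator from Contracts")
            .ToList();

        // Rebuild the complex type manually.
        return rows.Select(r => new Contract
        {
            Id = r.Id,
            MetaProps = new MetaProps { DeleteFlag = r.DeleteFlag, Creator = r.Creator }
        }).ToList();
    }
}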
I just learned about a genius type that would simplify a lot of my work, but it looks like my preferred ORM does not recognize it.
Is there a workaround to let ServiceStack OrmLite recognize HierarchyId in SQL Server? Any suggestions about which files to modify, and any hints on how to proceed?
EDIT :
Here is a better illustration of the problem. I have the following class:
public class MyClass
{
public int Id { get; set; }
public SqlHierarchyId HierarchyId { get; set; }
}
SqlHierarchyId is a custom SQL Server data type. OrmLite will generate the following class for it:
Funny enough, I can use the [StringLength(255)] attribute on the property and it will get the varchar(255) type instead:
I manually changed the table here and added the column data type to showcase the difference. Please note the data type of the third column:
Having a varchar representation is perfectly fine with other DBMSs, as it can be converted within C#, but with SQL Server it is preferable to have it match the corresponding data type. This will make the creation of views easier (due to the built-in functions of the hierarchyid data type).
I know the type is not supported by EF4 (not sure about 5). I also browsed the OrmLiteDialectProviderBase.cs file on GitHub and I can see a list of supported ADO.NET data types.
My simple question is: is this a hard limitation of ADO.NET, or could support appear in OrmLite at some point? I am willing to help extend this part if any suggestions are made.
ADO.NET has support for the hierarchyid type; an example can be found here, and it shows that ADO.NET can read values from SQL Server as a hierarchyid directly, but you need to pass parameters to the server as a string.
Adding support for the hierarchyid type's methods to an ORM framework would break the abstraction between the ORM API and the RDBMS. I would assume this is the reason such functionality has not been added to Entity Framework.
You could work around the issue by keeping a string representation of the hierarchy in your database and having the hierarchyid version as a computed column/property in both your database and your C# class; you would need to exclude the computed C# property from the ORM mapping.
For example, your table column would be declared as:
[SqlHierarchyId] AS ([hierarchyid]::Parse([HierarchyId])) PERSISTED
and your class as:
using System.Data.SqlTypes;          // SqlString
using Microsoft.SqlServer.Types;     // SqlHierarchyId
using ServiceStack.DataAnnotations;  // [Ignore]

public class MyClass {
public string HierarchyId {
get;
set;
}
[Ignore]
public SqlHierarchyId SqlHierarchyId {
get {
return SqlHierarchyId.Parse(new SqlString(HierarchyId));
}
set {
HierarchyId = value.ToString();
}
}
}
This will persist updates from the .NET layer and allow you to use the hierarchyid methods to construct queries in SQL Server, while working with materialised objects in the .NET layer.
You would have to construct queries against the string representation in your ORM layer, but this could still leverage some of the hierarchyid helper methods, for example:
public IEnumerable<MyClass> GetDescendants(MyClass node) {
string currentLocation = node.HierarchyId;
string followingSibling
= node.SqlHierarchyId.GetAncestor(1)
.GetDescendant(node.SqlHierarchyId, SqlHierarchyId.Null)
.ToString();
return db.Select<MyClass>(n => n.HierarchyId > currentLocation
&& n.HierarchyId < followingSibling);
}
Apologies if I have got the OrmLite syntax wrong.