We have a software suite with different components and a SQL Server database that is used by most of these components.
Let's say we have a table Configuration with two columns [Name] and [Value]. This table contains our db schema version (as Name='Version', Value='1.0').
Then we have a table Customer with columns ID and Name.
So the class for LINQ-to-SQL queries on this table would look something like this:
[Table(Name="Customer")]
public class Customer
{
[Column]
public long ID {get; private set;}
[Column]
public string Name {get; set; }
}
For our next release, we need to change this Customer table and add a column Address. When we install this new schema, we set the version value in the Configuration table to 1.1, so we can determine which version is installed.
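For illustration, here is roughly how the installed version can be read (the helper below is just a sketch, not our production code):

// Sketch: read the installed schema version from the Configuration table.
public static Version GetInstalledSchemaVersion(DataContext context)
{
    var value = context.ExecuteQuery<string>(
        "SELECT [Value] FROM Configuration WHERE [Name] = 'Version'").Single();
    return Version.Parse(value);
}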
A lot of our customers are waiting for this new version of the component because of a bugfix, but they don't actually need the new db schema (the reasons don't matter here; let's just assume our software needs backward compatibility with old db schema versions).
So my question is: is there a best practice or technology to handle this situation?
If I add the new column to the Customer class like
[Column]
public string Address {get; set;}
I won't be able to use this class for queries against a db with the 1.0 schema (because of the SqlException for the missing column). If I create a new class (derived, using interfaces, or whatever), I have to deal with different types for different schema versions. And of course Table<Customer10> will never be convertible to Table<Customer11>, no matter what the inheritance relation between Customer10 and Customer11 is. So far I have not found a way to encapsulate these problems inside my DataContext derivative in a transparent way:
public class MyDataContext : DataContext
{
    public Table<Customer> Customers
    {
        get
        {
            // this won't compile, no matter how Customer10 and Customer11 are related
            return version > 10
                ? GetTable<Customer11>()
                : GetTable<Customer10>();
        }
    }
}
I also experimented with inheritance hierarchies and discriminators, but since the version is not a column of Customer, this does not seem to work.
I'd like to find a solution where I can use one type per table across different schema versions, and where removed or added columns take default values if they are not supported by the installed db schema.
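One direction I have been considering (only a sketch, and I'm not sure it is the right approach) is to drop the mapping attributes and let LINQ to SQL load an external XML mapping chosen at runtime, so the same Customer class works against either schema version:

// Sketch only: customerMap10.xml and customerMap11.xml are hypothetical mapping
// files, each listing only the columns that exist in that schema version.
public static class VersionedContextFactory
{
    public static DataContext Create(string connectionString, Version schemaVersion)
    {
        var mappingFile = schemaVersion >= new Version(1, 1)
            ? "customerMap11.xml"
            : "customerMap10.xml";

        // The XML mapping takes the place of the attribute-based mapping, so the
        // Customer class itself stays free of [Column] attributes.
        var mapping = XmlMappingSource.FromUrl(mappingFile);
        return new DataContext(connectionString, mapping);
    }
}

With this, a property that the active mapping does not mention (such as Address against a 1.0 database) should never appear in generated SQL and would simply keep its default value.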
I want to make some changes to a table that already exists (I am using SQLite). I want to remove an attribute from a property in a class - specifically the [Required] attribute. How can I do that? What do I need to change - do I make changes in the DbContext or in the Migrations folder, and which commands can I use in the Package Manager Console?
public class Appointment
{
    [Required]
    [MaxLength(50)]
    public string Company { get; set; }

    // ...
}
This is a good example of why it is better to use the fluent API than attributes to specify your database: you want the same class to be used in different databases.
The DbContext defines the database: what tables are in it, how the tables relate to each other, which classes represent the tables, and what constraints apply to the tables.
For instance, maybe in one database you want Company to have a MaxLength of 50 characters, while in another database you might want a length of 100 characters. Some databases need a simple DateTime, others require a DateTime2. Or maybe you want a different precision for your decimals?
Hence it is usually better to specify these database specifics where they belong: in the definition of the database, which is the DbContext.
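As an illustration only (the two context classes below are invented for this answer, and I'm assuming the EF6-style fluent API that is also used in the snippet further down; EF Core's syntax differs slightly), the same Appointment class can be configured differently per database:

public class Appointment
{
    public int Id { get; set; }
    public string Company { get; set; }
}

// Hypothetical context for database A: Company is required, max 50 characters.
public class DatabaseAContext : DbContext
{
    public DbSet<Appointment> Appointments { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Appointment>()
            .Property(a => a.Company)
            .IsRequired()
            .HasMaxLength(50);
    }
}

// Hypothetical context for database B: Company is optional, max 100 characters.
public class DatabaseBContext : DbContext
{
    public DbSet<Appointment> Appointments { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Appointment>()
            .Property(a => a.Company)
            .IsOptional()
            .HasMaxLength(100);
    }
}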
Back to your question
It depends a bit on the version of Entity Framework you are using how the database reacts if, during a migration, you remove the Required attribute and move it to the fluent API. I guess it is best to experiment with it.
In OnModelCreating, or the migration equivalent of it, you will have something like:
var entityAppointment = modelBuilder.Entity<Appointment>();
var propertyCompany = entityAppointment.Property(appointment => appointment.Company);

propertyCompany.IsOptional()
    .HasMaxLength(50)
    ...; // other non-default specifications of Company,
         // like column name, IsUnicode, etc.
When migrating, the statements might be a little different, but I guess you get the gist.
My issue is the following.
I have this simple model in my code:
public class Client
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}
I defined a mapping for it:
public class CustomMappings : Mappings
{
    public CustomMappings()
    {
        For<Client>().TableName("clients")
            .PartitionKey(x => x.Id);
    }
}
I created the table via Table<TEntity>.CreateIfNotExist() method:
var table = new Table<Client>(session);
table.CreateIfNotExists();
And I can insert my data like this:
IMapper mapper = new Mapper(session);
var client = new Client
{
    Id = Guid.NewGuid(),
    Name = "John Smith"
};
await mapper.UpdateAsync(client);
After this, I've changed my model by adding a new property:
public class Client
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Surname { get; set; }
}
I need to alter this table, because I want to add a surname column to it.
Of course, without that column I get an exception when I try to insert a value:
Cassandra.InvalidQueryException: Undefined column name surname
at Cassandra.Requests.PrepareHandler.Prepare(PrepareRequest request, IInternalSession session, Dictionary`2 triedHosts)
at Cassandra.Requests.PrepareHandler.Prepare(IInternalSession session, Serializer serializer, PrepareRequest request)
at Cassandra.Session.PrepareAsync(String query, IDictionary`2 customPayload)
at Cassandra.Mapping.Statements.StatementFactory.GetStatementAsync(ISession session, Cql cql, Nullable`1 forceNoPrepare)
at Cassandra.Mapping.Mapper.ExecuteAsync(Cql cql)
But the class Cassandra.Data.Linq.Table<TEntity> contains neither an .AlterOrCreate() nor an .Alter() method. Also, there is no .GetAlter() method in Cassandra.Mapping.Statements.CqlGenerator.
Which way is more appropriate to solve this problem? I have two ideas (besides creating a pull request with the needed methods to the DataStax C# driver repository on GitHub :)).
Altering tables via a CQL script in a .cql file which is executed from C# code.
Creating a new table after each change of the model and migrating the old data to it.
I'm a newbie with Cassandra and I suspect that the needed method does not exist in the library for a good reason. Maybe there are problems with consistency after altering, because Cassandra is a distributed database?
Changes to Cassandra's schema should be made very carefully - you're correct about its distributed nature, and you need to take that into account when making changes. It's usually recommended to make changes via only one node, and after executing any DDL statement (create/drop/alter) you need to check for schema agreement (for example, via the CheckSchemaAgreementAsync method of the Metadata class) and not execute the next statement until the schema is in agreement.
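For illustration, a minimal sketch (inside an async method; the contact point, keyspace, and table name are made up) of applying one DDL statement from C# and waiting for schema agreement before running the next one:

// Sketch: apply a DDL change and wait until all nodes agree on the new schema.
var cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build();
var session = cluster.Connect("my_keyspace");

session.Execute("ALTER TABLE clients ADD surname text");

// Don't run the next DDL statement until the schema is in agreement.
while (!await cluster.Metadata.CheckSchemaAgreementAsync())
{
    await Task.Delay(500);
}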
Talking about the changes themselves - I'm not sure that the C# driver is able to generate schema changes automatically, but you can execute the changes as CQL commands, as described in the documentation (please read the limitations carefully!). Schema changes can be separated into 2 groups:
Changes that can be applied to a table without the need to migrate the data
Changes that require creating a new table with the desired structure and migrating the data.
The first group includes the following (maybe not a full list):
Adding a new regular column to a table
Dropping a regular column from a table
Renaming a clustering column
The second group includes everything else:
Changing the primary key - adding or removing columns to/from it
Renaming non-clustering columns
Changing the type of a column (it's really recommended to create a completely new column with the required type, copy the data, and then drop the original column - reusing the same name with a different type is not recommended, as it could make your data inaccessible)
Data migration can be done with different tools, and it may depend on the specific requirements, like type changes, etc. But that's a different story.
I am trying to use the Entity Framework Code First method to connect to a PostgreSQL database. When I use the Entity Data Model wizard in Visual Studio to generate C# classes from the database, it generates classes for each table successfully, but the views in the database are not generated.
Can someone tell me where I went wrong? I use Entity Framework 6.1.3 with Npgsql 2.2.5. The PostgreSQL database is version 9.3.6, installed on an Ubuntu server.
Thanks
I know this question is a little bit old now, but I'll chime in here for anyone else who may be looking for solutions. My answer may not be exactly what the question was looking for; however, it has sufficed as a workaround for me.
The problem with views is that Entity Framework has a hard time determining the primary key column for them. In SQL Server, you can use the ISNULL() function to trick EF into thinking that a column is a key column, but the equivalent coalesce() function in Postgres isn't good enough for EF. I also tried generating an auto-incrementing row id column, joining to other tables with primary keys, etc.; no luck with any of these.
However, something that has just about emulated the functionality I needed - being able to query my views into my view objects - is to extend your context class with members that call Database.SqlQuery and return the result as an IQueryable.
For example:
Suppose a view in your database, "foo", with columns id, bar, and baz. You can write your own POCO to hold the view data like so:
public class foo
{
    public int id { get; set; }
    public string bar { get; set; }
    public string baz { get; set; }
}
and then extend your context class with a partial class definition like this
public partial class FooContext : DbContext
{
    public IQueryable<foo> foo =>
        this.Database.SqlQuery<foo>("select * from foo").AsQueryable();
}
And then you can query it from your context just the same as any other table
context.foo.Where( x => x.id > 100 ).ToList(); // etc., etc.
You won't be able to do inserts or use any of the extra capabilities that usually come with a standard DbSet, but views are typically used for read-only queries anyway (unless you're using some special insert triggers)...
This gives you a base call that queries the entire view. The SqlQuery call is deferred, so nothing is executed until you enumerate the result, and you're free to compose further LINQ operators on it, such as Where, to filter it down to the results you want (note that these operators run in memory over the returned rows rather than being translated into SQL).
I migrated from SQL Server to PostgreSQL using the Npgsql library, and this fix allowed my views to work without any changes to my program's codebase, just as if nothing had changed at all, despite the fact that the EDMX would not generate my view objects due to the lack of a (discernible) primary key.
Hope this helps!
I used the EF6 Database First tools to generate C# classes for 2 tables from my database, then (as advised in the blog post that helped me through the steps to do that) copied the resulting .cs files into a new project. I made a few edits to the classes to support sensible names in my C# code. Here's a snippet of one of the classes, with "LongTableName" replacing a strangely long name used in the database.
namespace RidesData
{
    [Table("LongTableName")]
    public partial class PhoneData
    {
        [Key]
        [Column("LongTableNameID")]
        [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
        public int ID { get; set; }

        [Column("LongTableNameAccountID")]
        public int AccountID { get; set; }

        // more fields
    }
}
I am not in control of the table names, nor of the fact that many of the column names have the table name as a prefix. But the Code First ideas in EF6 should, I thought, let me use reasonable class and field names despite that. (The Database First code generator did a good job of adding code to OnModelCreating to specify that none of the columns corresponding to C# string data use Unicode.)
My model (generated by the EF6 tools and that inherits from DbContext) includes (after some renaming by me)
public virtual DbSet<PhoneData> PhoneRecs { get; set; }
and I thought all would be fine when I created an instance of PhoneData, populated it, and did
Model.PhoneRecs.Add(phoneData);
but the first thing that happened when I ran the code -- well before any call to SaveChanges() -- was that EF generated CREATE TABLE statements for the two tables; the table corresponding to the snippet above was named PhoneDatas (not using the specified table name) and the column names were the same as the field names in the class (not what was specified in the Column(...) attributes).
Of course the table I had specified did not need to be created. EF just had to grok that I wanted to use the table and column names I had specified via attributes.
I did not expect this failure of explicit Code First attributes. Does anyone have a clue why this isn't doing what I want, or how to fix it? (Do I have to do something to specify the table & column names in OnModelCreating as well as -- or instead of -- the attributes?)
Note that the project that I copied these classes into had never "seen" the database before. There are no vestiges of any "models" left over from tooling having looked at the database. Also, I hope it does not matter that I've tried to keep things on .Net 4.0 (avoiding going to 4.5 in this code).
Any assistance would be appreciated.
I'm not a big fan of DataAnnotations either. Use EntityTypeConfiguration instead. It gives you the naming flexibility I think you are looking for.
Example.
public class PhoneData
{
    public int ID { get; set; }
    public string SomeProperty { get; set; }
}

public class PhoneDataMap : EntityTypeConfiguration<PhoneData>
{
    public PhoneDataMap()
    {
        ToTable("WhatEverYou_Want_to_call_this");
        HasKey(m => m.ID);
        Property(m => m.SomeProperty).HasColumnName("whatever").IsRequired();
        // etc.
    }
}
Then in your OnModelCreating you add:
modelBuilder.Configurations.Add(new PhoneDataMap());
On a side note, if you are having trouble with the pluralization of your table names, you can add this to OnModelCreating as well:
modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
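For completeness, a rough sketch of how the pieces above might sit together (the context class name and connection-string name are placeholders, not from the original post):

public partial class RidesContext : DbContext
{
    // "name=RidesDb" is a hypothetical connection-string name.
    public RidesContext() : base("name=RidesDb") { }

    public virtual DbSet<PhoneData> PhoneRecs { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new PhoneDataMap());
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
    }
}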
I have a Code First EF setup and I want to use native SQL for the more complex select statements.
When I try to execute:
using (VaultsDbContext db = new VaultsDbContext())
{
    var contracts = db.Contracts.SqlQuery("select * from Contracts").ToList<Contract>();
}
I got:
Cannot create a value for property 'MetaProps' of type
'DskVault.Models.DbModels.MetaProps'. Only properties of primitive or
enumeration types are supported.
MetaProps is a class that holds a delete flag, creator, etc., and it's a property on all my classes. It's not mapped to a different table; every table has the deleteflag, creator, etc. columns.
public class Contract
{
    public long Id { get; set; }
    ...
    public MetaProps MetaProps { get; set; }
}
Is there a way to map the native SQL result to the class if the class contains a complex type, or does EF not support that? Also, what if the complex type is an entity mapped to another table (join)?
Edit:
Version: Entity Framework 6
I know from experience that not all the fields in your table have to be contained in your model. This is a good thing when it comes to installing updates into production.
Have you tried reverse engineering your tables in a SEPARATE temporary project using the Entity Framework Power Tools? This is a NuGet package that I have found to be extremely useful in Code First programming. Reverse engineering will overwrite existing files, so make sure not to do this on your live code.