I have a code-first Entity Framework context that needs to support both SQL Server and Oracle.
The model itself is fairly straightforward, except for a problem with multiple CASCADE DELETE paths in SQL Server that I want to resolve with a trigger.
I can create the trigger fine with the SQL method on DbMigration, but I'd like to only create the trigger if the database I'm migrating is actually a SQL Server database.
I'd like to do something like the following:
public override void Up()
{
    ...
    if (this.Database.Connection.ProviderName == "System.Data.SqlClient")
    {
        this.CreateTable(...); // Create the table without CASCADE DELETE
        this.Sql(...);         // Create the trigger
    }
    else if (this.Database.Connection.ProviderName == "Oracle.ManagedDataAccess.Client")
    {
        this.CreateTable(...); // Create the table with CASCADE DELETE
    }
    ...
}
My problem is that the DbMigration base class does not appear to provide any hook to interrogate the current database connection.
I can't interrogate the .config file via ConfigurationManager.ConnectionStrings because I'm likely to be overriding the connection string while using e.g. the Update-Database cmdlet.
Is there any way to interrogate the current database connection during a DbMigration? Are there any other hooks I could use?
For the benefit of future readers, I eventually resolved this issue for my own use-case (supporting both SQL Server and Oracle) by creating two data contexts with a shared interface for the same set of entities - one for SQL Server, and one for Oracle - detecting the ProviderName on the connection string, and injecting the appropriate context at runtime.
Not only did this let me maintain two sets of migrations with the database-specific behaviour, it also allowed me to better support e.g. the naming conventions our Oracle developers asked for, without imposing those conventions on our SQL Server deployments.
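A minimal sketch of that runtime selection, assuming a shared interface and two concrete contexts (all names here are hypothetical, not from the original code):

```csharp
// Hypothetical sketch: pick the context implementation from the
// ProviderName on the configured connection string.
var css = ConfigurationManager.ConnectionStrings["MyApp"];
IMyDataContext context;
if (css.ProviderName == "System.Data.SqlClient")
{
    context = new SqlServerDataContext(css.ConnectionString);
}
else
{
    context = new OracleDataContext(css.ConnectionString);
}
```

Each concrete context carries its own migration configuration, so Update-Database can be run per provider.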
Related
I have a UI page that contains a drop-down with multiple values.
From that page, the user selects one schema from the drop-down, and the data for that schema should then be loaded into a grid. That means in future we may get more schemas, each in the same Oracle database and with the same table structure.
The entity context was already created using the DB First approach with the default config, but based on the requirement above, I need to connect to the Oracle DB based on the schema selected.
The code below didn't work for me; it always points to the schema configured in the connection string, not the schema I'm passing to the entity context.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    if (SchemaName != null)
    {
        modelBuilder.HasDefaultSchema(SchemaName);
    }
    base.OnModelCreating(modelBuilder);
    throw new UnintentionalCodeFirstException();
}
Can anybody suggest the best way to do this? I tried applying the schema name at model creation, as in the code above, but it didn't work for me.
I found a solution by adding a helper class that updates the entity context files at run time by replacing the schema.
I just followed EF6 Dynamic Schema Change, which works well for me.
I changed the connection to use Oracle, and called this 'connect' method from my service layer.
I suggest you create a separate entity context for each Oracle schema. You can use the same Oracle account as long as that account has access to all the schemas (although I think it is much easier to use a separate account for each schema). Depending on the schema selected at runtime, it is easy to pick the correct entity context with an if-then-else statement. The schema for each entity class is embedded in the .edmx file, so there is no worry that a query will fail even when using one Oracle account (provided access was granted).
Of course, if you are only using one Oracle account, then things get complicated when creating the initial entity context. One approach is to use the original schema's account (or a temp account) and then edit the app.config to the desired Oracle account afterwards, or try this approach and remove the logon trigger once done (note: I didn't try this approach, as I only tried the former).
Personally, I think having a separate entity context for each schema, with a separate Oracle account, is a cleaner and simpler approach than updating the entity context file dynamically.
Currently I have an ERP which is a Winforms based client (with SQL Server), which gets delivered and updated on desktops using ClickOnce.
The current version uses Entity Framework 4 (ObjectContext-based) and database first. The way I do updates to the client when there's a database schema change is a four-step process:
1. Create an intermediary updated database schema on production with compatible columns (allow null everywhere, have default values, etc.). Old clients can connect to that database and keep working as if nothing had changed.
2. Update desktop clients to an intermediary version with the updated features, which accounts for this intermediary schema but has all "final schema" features.
3. Once all clients are updated and all records are compatible with the "final" schema, make a new update to the schema with the needed database constraints.
4. Update all clients to a final version which is mapped to this final schema (and which accounts for database-constraint errors and needs those schema changes to work).
I've found this process, if a bit cumbersome for us, to be better for the clients, who can update when they see fit and don't get interrupted by an update in the middle of their work (which may involve customers standing in front of them who don't want to wait for a software update).
Now I have made an almost-complete rewrite of the client (still Winforms), using EF6 and code-first, with migrations.
I've been searching for documentation but can't find anything (seems there's only web programming these days, where generally updates to the database and the web client can be done simultaneously and without interrupting users), but once I apply migrations on production, non-updated clients can no longer work with the database. EF will complain and throw exceptions upon instantiating the context if it's not up to date with the database schema.
Specific question: is there a way to have an EF6 code-first DbContext work with a newer migration of the database schema than the one compiled in, as long as it is compatible? If that's the case, I could just keep doing what I was doing so far.
And an (I guess) opinion-based question, if anyone wants to extend on the actual answer: is there any better way to handle this scenario? I'm sure I'm not the only one with this problem; however, the keywords needed to Google for documentation are too broad, and so far only web scenarios have come up in my searches.
I'm currently at a stage in the client rewrite where major changes are allowed, so I don't mind if the solution complicates parts of the code.
When an application initializes the model database, either by directly calling DbContext.Database.Initialize or by creating the first DbContext instance, it checks whether the model in the application and the model in the database match.
To do so, it calculates the model hash, and compares it with the hash stored in the __MigrationHistory table (or in the EdmMetadata table, if it was updated from EF 4.x). This is done in the System.Data.Entity.Internal.ModelCompatibilityChecker.CompatibleWithModel method, which receives a parameter named throwIfNoMetadata which happens to be false in the internal implementation, so no exception is thrown if there is no metadata.
So, if you make these tables disappear in some way before the database is initialized, you'll avoid the error. The important point is that you must make this change without using DbContext; otherwise the database will try to initialize, and, if the table exists, it will fail. So you can use plain ADO.NET to drop the tables.
Take into account that the metadata tables can be automatically created, for example by applying migrations.
You can also use ctx.Database.CompatibleWithModel(true) to check if the database metadata exists and is compatible or not, to get rid of it. The parameter is precisely the throwIfNoMetadata that I mention above.
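As a rough sketch of the plain-ADO.NET clean-up described above, assuming SQL Server and a known connection string (the two table names are the standard EF metadata tables mentioned earlier):

```csharp
// Drop the EF metadata tables before the first DbContext is created,
// so the model-compatibility check finds no metadata and passes.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText =
            "IF OBJECT_ID('dbo.__MigrationHistory') IS NOT NULL DROP TABLE dbo.__MigrationHistory; " +
            "IF OBJECT_ID('dbo.EdmMetadata') IS NOT NULL DROP TABLE dbo.EdmMetadata;";
        cmd.ExecuteNonQuery();
    }
}
```

Run this once at startup, before any DbContext is instantiated, so the initializer never sees the stale metadata.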
Compatibility check in db initializers:
The default DB initializer is CreateDatabaseIfNotExists, and it checks model compatibility with throwIfNoMetadata set to false. That's why this solution works. Alternatively, if you implement your own DB initializer that doesn't run the check at all, it should also work.
public virtual void InitializeDatabase(TContext context)
{
    Check.NotNull(context, "context");

    var existence = new DatabaseTableChecker().AnyModelTableExists(context.InternalContext);
    if (existence == DatabaseExistenceState.Exists)
    {
        // If there is no metadata either in the model or in the database, then
        // we assume that the database matches the model because the common cases for
        // these scenarios are database/model first and/or an existing database.
        if (!context.Database.CompatibleWithModel(throwIfNoMetadata: false, existenceState: existence))
        {
            throw Error.DatabaseInitializationStrategy_ModelMismatch(context.GetType().Name);
        }
    }
    else
    {
        // Either the database doesn't exist, or exists and is considered empty
        context.Database.Create(existence);
        Seed(context);
        context.SaveChanges();
    }
}
I have started introducing code first migrations in the project, however I have stumbled upon several issues I am not able to resolve.
The setup is that the project has two targets: an online client, which connects to a WCF service and uses a regular SQL Server database, and an offline client, which holds all data locally and uses a SQL Server CE database.
This already works. Now I need to introduce a way to migrate both database versions, and of course I would prefer using the same migrations code. What I have done so far is:
enable-migrations (using a localhost SQL Server db, where I create the migrations against)
add-migration (for the initial migration)
One problem is that when I create my SQL Server CE database using the CreateIfNotExists initializer, the db is created with all string properties mapped to nvarchar columns.
However, when I start using migrations and create my db with a MigrateToLatestVersion initializer, the db is created, but the string properties are now mapped to ntext columns.
A subsequent seed fails, because I get the following exception:
The ntext and image data types cannot be used in WHERE, HAVING, GROUP BY, ON, or IN clauses, except when these data types are used with the LIKE or IS NULL predicates.
I have tried to force the model builder to use nvarchar for strings, but to no avail. It is completely ignored.
modelBuilder.Properties<string>().Configure(config => config.HasColumnType("nvarchar"));
I am kind of lost here really.
I have found the solution, and it is rather embarrassing, really.
I forgot to recreate my initial migration after I added my model builder code for the string properties.
I just recreated it now and it works.
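For reference, this is roughly what that model builder configuration looks like; the explicit max length here is my own assumption (without a length, SQL Server CE may still fall back to ntext):

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Map all string properties to nvarchar with an explicit length,
    // then regenerate the initial migration so it picks this mapping up.
    modelBuilder.Properties<string>()
        .Configure(c => c.HasColumnType("nvarchar").HasMaxLength(4000));
    base.OnModelCreating(modelBuilder);
}
```

The key point is the ordering: the migration is generated from the model, so any convention added after Add-Migration is invisible until the migration is recreated.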
In our software, we have a customer base with existing databases. The databases are currently accessed via EntitySpaces, but we'd like to switch to EntityFramework (v6), as EntitySpaces is no longer supported. We'd also like to make use of the migrations feature. Automatic migrations are disabled, since we only want to allow database migration to an admin user.
We generated the EF model from an existing database. It all works pretty well, but the real problem we have is programmatically distinguishing between existing databases that match the model but have not yet been converted to EF (missing __MigrationHistory table), and empty/new databases. Converting existing databases works well with an empty migration, but for new databases we also need a migration containing the full model. Having an initial migration in the migration chain always clashes with existing databases. Of course we could create a workaround with external SQL scripts or ADO commands, creating and populating the __MigrationHistory table, but that is something we'd like to avoid, because some of our clients use SQL Server databases and some use Oracle, so we'd really like to keep the abstraction layer provided by EF.
Is there a way to get EF to handle both existing, and new databases through code-based migrations, without falling back to non-EF workarounds?
My original suggestion was to trap the exception raised by CreateTable, but it turns out this is executed in a different place, so the exception cannot be trapped within the migration.
The simplest way to proceed is to use the Seed method to create your initial database if it is not present. To do this:
Starting from a blank database, add an Initial Create migration and grab the generated SQL
Add-Migration InitialCreate
Update-Database -Script
Save this script. You could add it to a resource, static file or even leave it inline in your code if you really want, it's up to you.
Delete all of the code from the InitialCreate migration (leaving it with blank Up() and Down() methods). This allows your empty migration to run, causing the __MigrationHistory table to be generated.
In your Migration configuration class, you can query and execute SQL dynamically using context.Database.SqlQuery and context.Database.ExecuteSqlCommand. Test for the existence of your main tables, and if it's not present, execute the script generated above.
This isn't very neat, but it's simple to implement. Test it well, as the Seed method runs after EVERY migration, not just the initial one. This is why you need to test for the existence of a main table before you do anything.
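A hedged sketch of such a Seed method; the context type, table name, and script resource are placeholders, not part of the original answer:

```csharp
protected override void Seed(MyContext context)
{
    // Run the saved InitialCreate script only if the main table is
    // missing; Seed runs after every migration, so this check matters.
    var mainTable = context.Database.SqlQuery<int?>(
        "SELECT OBJECT_ID('dbo.MyMainTable')").FirstOrDefault();
    if (mainTable == null)
    {
        context.Database.ExecuteSqlCommand(Properties.Resources.InitialCreateScript);
    }
}
```

On an existing database the main table is found and the Seed does nothing, which is exactly the behaviour wanted for already-converted databases.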
The more complicated approach would be to write a "CreateTableIfNotExists" method for migrations, but this will involve use of Reflection to call internal methods in the DbMigration class.
In most ASP.NET applications you can change the database store by modifying the connection string at runtime, e.g. I can change from using a test database to a production database by simply changing the value of the "database" field in the connection string.
I'm trying to change the schema (but not necessarily the database itself) with entity framework but no luck.
The problem I'm seeing is that the SSDL content in the edmx XML file stores the schema for each entity set.
See below:
<EntitySet
Name="task"
EntityType="hardModel.Store.task"
store:Type="Tables"
Schema="test" />
Now I have changed the schema attribute value from "test" to "prod", and it works.
But this does not seem to be a good solution:
I need to update every entity set as well as stored procedures (I have 50+ tables).
I can only do this at compile time.
If I then try to later update the entity model, entities that already exist are read again, because EF does not recognize that the table already exists in the EDM.
Any thoughts?
I have this same issue and it's really rather annoying, because it's one of those cases where Microsoft really missed the boat. Half the reason to use EF is support for additional databases, but that only helps if you go code first, and even that doesn't really address this problem.
In MS SQL, changing the schema makes very little sense, because the schema is part of the identity of the tables. For other types of databases, the schema is very much not part of the identity of the tables and only determines their location. Connect to Oracle, and changing the database and changing the schema are essentially synonymous.
Update: Upon reading your comments, it's clear that you want to change the referenced schema for each DB, not the database. I've edited the question to clarify this and to restore the sample EDMX you provided, which was hidden in the original formatting.
I'll repeat my comment below here:
If the schemata are in the same DB, you can't switch these at runtime (except with EF 4 code-only). This is because two identically-named and structured tables in two different schemata are considered entirely different tables.
I also agree with JMarsch above: I'd reconsider the design of putting test and production data (or, actually, 'anything and production data') in the same DB. Seems like an invitation to disaster.
Old answer below.
Are you sure you're changing the correct connection string? The "normal" provider connection string is embedded inside the EF connection string, which also specifies the location of the CSDL/SSDL/etc. It's common to also have a plain connection string for use by some other part of your app (e.g., ASP.NET membership). In that case, when changing DBs you must update both of your connection strings.
Similarly, if you update the connection string at runtime, then you must use specific tools for this which understand the EF connection string format, as they are separate from the usual connection string builder. See the example in the link, and see also this help on assigning EF connection strings.
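For an ObjectContext-era model, the runtime tool for this is EntityConnectionStringBuilder; here is a sketch, where the metadata paths, catalog name, and context type are illustrative only:

```csharp
// Build an EF connection string at runtime; the provider connection
// string is nested inside the EF one alongside the metadata locations.
var builder = new EntityConnectionStringBuilder
{
    Provider = "System.Data.SqlClient",
    ProviderConnectionString =
        "Data Source=.;Initial Catalog=Prod;Integrated Security=True",
    Metadata = "res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl"
};
using (var ctx = new MyEntities(builder.ConnectionString))
{
    // query as usual
}
```

Swapping the Initial Catalog this way changes the database, but note that it does not change the schema baked into the SSDL, which is the crux of the question above.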
The easiest way to solve the problem is to manually remove all entries like Schema="SchemaName" from the SSDL part of the model.
Everything works properly in this case.
Sorry, it's not a robust answer, but I found this project on CodePlex (as well as this question) while Googling around for a similar problem:
http://efmodeladapter.codeplex.com/
The features include run-time adjustment of the model schema, including:
- Adjusting data-level table prefixes or suffixes
- Adjusting the owner of database objects
Some code from the docs:
public partial class MyObjectContext : BrandonHaynes.ModelAdapter.EntityFramework.AdaptingObjectContext
{
    public MyObjectContext()
        : base(myConnectionString,
               new ConnectionAdapter(
                   new TablePrefixModelAdapter("Prefix",
                       new TableSuffixModelAdapter("Suffix")),
                   System.Reflection.Assembly.GetCallingAssembly()))
    {
        ...
    }
}
Looks like it's exactly what you're looking for.
The connection string for EF is in the config file. There is no need to change the SSDL file.
EDIT
Do you have the prod and test schemas in the same database?
If yes, you can fix it by using a separate database for prod and test, with the same schema name in both databases.
If no, you can fix it by using the same schema name in both databases.
If you absolutely must have different schema names, create two EF models, one for test and one for prod, and then select which one to use in code based on a value in your config file.
When I create a new "ADO.NET Entity Data Model", there are two properties, "Entity Container Name" and "Namespace", available for editing in design view. Using namespace.EntityContainerName, you can create a new instance specifying a connection string.
MyEntities e = new MyEntities("connstr");
e.MyTable.Count();
I'm not sure if this helps you or not, good luck!
Also, this is a good case for multiple layers (they don't have to be separate projects, but they could be).
Solution
* DataAccess - Entities here
* Service - Wraps access to DataAccess
* Consumer - Calls Service
In this scenario, the consumer calls service, passing in whatever factor determines which connection string is used. The service then instantiates an instance of data access passing in the appropriate connection string and executes the consumer's query.
Here is a similar question with a better answer:
Changing schema name on runtime - Entity Framework
The solution that worked for me was the one written by Jan Matousek.
Solved my problem by moving to SQL Server and away from MySQL.
MySQL and MSSQL interpret "schemas" differently. Schemas in MySQL are the same as, or synonyms for, databases. When I created the model, the schema name, which is the same as the database name, was hard-coded in the generated model XML. In MSSQL the schema is "dbo" by default, which also gets hard-coded, but this isn't an issue, since in MSSQL schemas and databases are different things.