Does anyone know if there is a way to use FluentMigrator to create schema-prefixed tables for different users? E.g.
UserMachine1 points to a database, but their tables are created as UserMachine1.TableName when they run the migrations.
UserMachine2 points to the same database but would generate UserMachine2.TableName in the same database.
As you can see the schema would be machine name specific in our case.
Can this be done in fluent migrator?
If it can, how do you define the schema prefix for the VersionInfo table? Out of the box it just puts it under dbo.VersionInfo like all the other tables.
Is this even a good idea? I am not the inventor of it; supposedly it is needed because we are moving to Azure databases.
Many thanks for your thoughts and answers, cheers
I found this link that details how you might do it: https://fluentmigrator.github.io/articles/version-table-metadata.html. I'm still not sure it is a good idea, as you would then also have to rename primary keys and constraints if you do provide nicely named ones. Also, how would you run custom SQL scripts against a specific schema? It all feels like extra work that is not adding much value.
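For reference, here is a rough sketch of what the approach from that article looks like, assuming a recent FluentMigrator version (namespaces and base-class details vary between versions); PerMachineVersionTable and the Customer migration are made-up examples, using the machine name as the schema:

    using System;
    using FluentMigrator;
    using FluentMigrator.Runner.VersionTableInfo;

    // Puts the VersionInfo table in a machine-specific schema instead of dbo.
    [VersionTableMetaData]
    public class PerMachineVersionTable : DefaultVersionTableMetaData
    {
        public override string SchemaName => Environment.MachineName;
    }

    // Every migration then also has to target the same schema explicitly
    // (this assumes the schema itself already exists or is created elsewhere).
    [Migration(1)]
    public class AddCustomerTable : Migration
    {
        public override void Up()
        {
            Create.Table("Customer").InSchema(Environment.MachineName)
                .WithColumn("Id").AsInt32().PrimaryKey().Identity()
                .WithColumn("Name").AsString(200).NotNullable();
        }

        public override void Down()
        {
            Delete.Table("Customer").InSchema(Environment.MachineName);
        }
    }

As noted above, the per-migration .InSchema() calls (and the renaming of named keys and constraints) are where the extra work starts to pile up.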
I'm setting up a data warehouse (in SQL Server); together with our engineers we have almost everything up and running. Our main application also uses SQL Server as a backend and aims to be code-first using Entity Framework. In most tables we added a column like updatedAt to allow incremental loading into our data warehouse, but there is a many-to-many association table created by Entity Framework which we cannot modify. The table consists of two GUID columns with a composite key, so the rows cannot be filtered by a watermark the way an incrementing integer or date column can. We are now basically figuring out the options for enabling incremental load on this table, but there is little information to be found.
After searching for a while I mostly came across posts which explained how it's not possible to manually add columns (such as updatedAt) to the association table, such as here Create code first, many to many, with additional fields in association table. Suggestions are to split out the table into two one-to-many tables. We would like to prevent this if possible.
Another potential option would be to turn on change data capture on the server, but that would potentially defeat the purpose of code first in the application.
Another thought was to add a column in the database itself, not in code, with a default value of the current datetime. But that might be impossible or incompatible with Entity Framework, and it would also defeat the code-first principle.
Are we missing anything? Are there other solutions for this? The ideal solution would be a code-first solution, or a solution in the ETL process that does not affect the base application and does not require changing too much. Any suggestions are appreciated.
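For what it's worth, this is roughly what the commonly suggested split looks like in EF6 code-first - the option we would prefer to avoid. Order, Product and OrderProduct are hypothetical names standing in for our actual entities:

    using System;
    using System.Collections.Generic;
    using System.Data.Entity;

    public class Order
    {
        public Guid Id { get; set; }
        public virtual ICollection<OrderProduct> OrderProducts { get; set; }
    }

    public class Product
    {
        public Guid Id { get; set; }
        public virtual ICollection<OrderProduct> OrderProducts { get; set; }
    }

    // Explicit join entity replacing the implicit many-to-many table,
    // which makes room for the updatedAt column needed for incremental loads.
    public class OrderProduct
    {
        public Guid OrderId { get; set; }
        public Guid ProductId { get; set; }
        public DateTime UpdatedAt { get; set; }

        public virtual Order Order { get; set; }
        public virtual Product Product { get; set; }
    }

    public class WarehouseContext : DbContext
    {
        public DbSet<OrderProduct> OrderProducts { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Composite key matching the layout of the existing association table.
            modelBuilder.Entity<OrderProduct>()
                .HasKey(op => new { op.OrderId, op.ProductId });
        }
    }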
Using Entity Framework, I have created an application which is able to read data from the database it was modeled against. I now have another database, with the same tables, hosted on another server.
The problem is that the tables on the second database belong to a schema with a different name to the original, so simply changing the connection string for my context in the app.config file doesn't work. (I get the error "table or view does not exist"). There must be some mapping somewhere in the auto generated code stating the original schema name.
What is the correct way to handle this kind of situation?
I don't really want to have to re-model the second database as it is identical to the first.
I can't change the databases as other applications would stop working.
Any nudge in the right direction would be greatly appreciated.
OK, so here's what I've done to solve this.
As @Kelmen mentioned in the comments, opening the EDMX file in a text editor revealed that this is where the schema information is stored. So I think I could have simply cleared out the value of the Schema attribute and used the connection string to drive the schema.
This didn't feel right for a couple of reasons:
If the model was refreshed at any point, it might have repopulated the schema names, which would be REALLY annoying. I didn't have time to test if this would actually happen.
This method wouldn't let me control the schema name if I did need to change it at runtime.
The solution was to use Code First and the Fluent API to edit the model configuration in the OnModelCreating method within my derived DbContext class.
I'm now considering modifying my context class so that I can pass in the name of the schema or possibly drive it from my app.config.
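As a rough illustration of what I mean (assuming EF6; MyContext and Customer are made-up names), the schema can be supplied at runtime like this:

    using System.Data.Entity;

    public class MyContext : DbContext
    {
        private readonly string _schema;

        public MyContext(string nameOrConnectionString, string schema)
            : base(nameOrConnectionString)
        {
            _schema = schema;   // e.g. read from app.config
        }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Map all entities to the schema chosen at runtime instead of the
            // one baked into the original model.
            modelBuilder.HasDefaultSchema(_schema);

            // Or, per entity:
            // modelBuilder.Entity<Customer>().ToTable("CUSTOMERS", _schema);
        }
    }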
I found the following tutorials quite useful:
Change Schema of Entity Framework
Entity Framework Change Schema Name Per Connection
I'm building a small db-cleaner app for a QA sql server database. Naturally, I need to delete table rows with dependencies on them.
T-SQL's cascading abilities are very limited, so I've tried using NHibernate to simplify matters. But the only way I found was to create a collection for each dependency in the object-to-delete and mark it as cascade=delete.
That means creating many, many collections (both in the hbm file and in the C# object) which I don't need for any other purpose. Which makes this method as complicated as just using SQL.
Am I missing something? Is there any easier, more generic way to perform delete-cascade?
Thanks.
EDIT: Just to be clear, I avoid changing the foreign keys in the DB because it's a QA DB, designed to be identical to the production DB.
Eventually I found out a generic way to do the deletion:
This guy wrote a recursive SP which does all the work for you:
http://www.sqlteam.com/article/performing-a-cascade-delete-in-sql-server-7
It needed a few touch-ups (since my DB uses schemas) but works like a charm.
I suppose you have foreign keys defined between the related tables in your database?
You can specify at the foreign key level what should happen with related records when a parent record is being removed.
Check out MSDN for the cascading options, and how to define them:
Cascading FK constraints
Foreign Key Constraints
In writing an application that runs on Fluent Nhibernate/Nhibernate, something has me a bit concerned. I suppose this would be true of any ORM (and even without using an ORM), but what is the ... I guess the word is 'field of study' that relates to the best practices and methods for updating a database after deployment?
In nHibernate, I establish a SessionFactory and have an initial run where it writes the database out based on the mappings. That's fine and good, I can even write the database out manually. But what about when my client comes back and wants something new added? Can I append to the database without losing my data? I am completely new to all of this and it has been troubling me since the start of this project, and I really do not know what direction to go to make sure I can manage the program after it is deployed.
I have looked at other Stack Overflow questions that I could find on this topic - one of which did not even have an accepted answer (though the question itself was kind of vague) - but I did discover the tool http://www.red-gate.com/products/sql-development/sql-compare/ from the question
Tool to upgrade SQL Express database after deployment, though I am wondering just how good a 'strategy' that is.
There are a couple of options. One is to use the AutoMapping feature in Fluent NHibernate to minimize the mapping code you write. If your schema changes comply with the AutoMap conventions, then you only need to work with the corresponding domain object changes.
Another less optimal option is to take a database first approach and have something like MyGeneration automatically generate the domain classes and NHibernate mapping files from the schema. This works if you have complete control of the database schema and it can be made to implement a good domain model design (both conditions which very rarely ever happen...)
In either approach, these tools can help handle the database scripting needed to "migrate" the schema changes to a new version.
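One concrete option along these lines (a sketch, not something the answer above mentions) is NHibernate's own SchemaUpdate, which applies non-destructive changes such as new tables and columns based on the current mappings, without dropping existing data:

    using NHibernate.Cfg;
    using NHibernate.Tool.hbm2ddl;

    // Reads hibernate.cfg.xml / app.config plus the current mappings, then issues
    // only the DDL needed to bring the existing schema up to date (data is kept).
    var cfg = new Configuration().Configure();
    new SchemaUpdate(cfg).Execute(false, true);   // don't echo the script, do execute it

Note that SchemaUpdate only adds things; it will not drop or alter columns, so destructive changes still need hand-written scripts as described in the next answer.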
From my experience, after deployment you have to keep your DB structure up to date manually.
That means that whenever you add to or change your DB structure, you do so using a script with DDL commands.
When you're ready to deploy, you just run those DDL scripts against your production DB.
For example, if you add a 'bar' column to your 'foo' table, your script would be something like (exact syntax depends on your database):
ALTER TABLE foo ADD bar INT NOT NULL DEFAULT 0;
In most ASP.NET applications you can change the database store by modifying the connection string at runtime, i.e. I can change from using a test database to a production database by simply changing the value of the "database" field in the connection string.
I'm trying to change the schema (but not necessarily the database itself) with Entity Framework, but no luck.
The problem I'm seeing is that the SSDL content in the EDMX XML file stores the schema for each entity set;
see below:
<EntitySet Name="task"
           EntityType="hardModel.Store.task"
           store:Type="Tables"
           Schema="test" />
Now I have changed the Schema attribute value from "test" to "prod" and it works.
But this does not seem to be a good solution.
I would need to update every entity set as well as the stored procedures (I have 50+ tables).
And I can only do this at compile time?
If I then later try to update the entity model, entities that already exist get read in again because EF does not recognize that the table already exists in the EDM.
Any thoughts?
I have this same issue and it's really rather annoying, because it's one of those cases where Microsoft really missed the boat. Half the reason to use EF is support for additional databases, but that doesn't help here unless you go code-first, and even that doesn't really address the problem.
In MS SQL, changing the schema makes very little sense, because the schema is part of the identity of the tables. For other types of databases, the schema is very much not part of a table's identity and only determines where the table lives. Connect to Oracle, and changing the database and changing the schema are essentially synonymous.
Update: Upon reading your comments it's clear that you want to change the referenced schema for each DB, not the database itself. I've edited the question to clarify this and to restore the sample EDMX you provided, which was hidden in the original formatting.
I'll repeat my comment below here:
If the schemata are in the same DB, you can't switch these at runtime (except with EF 4 code-only). This is because two identically-named and structured tables in two different schemata are considered entirely different tables.
I also agree with JMarsch above: I'd reconsider the design of putting test and production data (or, actually, 'anything and production data') in the same DB. Seems like an invitation to disaster.
Old answer below.
Are you sure you're changing the correct connection string? The database connection string used by the EF is embedded inside the EF connection string, which also specifies the location of the CSDL/SSDL/MSL metadata. It's common to have a separate "normal" connection string for use by some other part of your app (e.g., ASP.NET membership). In this case, when changing DBs you must update both of your connection strings.
Similarly, if you update the connection string at runtime then you must use specific tools for this, which understand the EF connection string format and are separate from the usual connection string builder. See the example in the link. See also this help on assigning EF connection strings.
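As a rough sketch of that (EF4/EF5-era namespace; in EF6 it is System.Data.Entity.Core.EntityClient, and MyEntities stands in for your generated context):

    using System.Data.EntityClient;

    // Build the EF connection string, which wraps the normal provider
    // connection string together with the model metadata locations.
    var builder = new EntityConnectionStringBuilder
    {
        Provider = "System.Data.SqlClient",
        ProviderConnectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True",
        Metadata = "res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl"
    };

    using (var context = new MyEntities(builder.ToString()))
    {
        // query as usual
    }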
The easiest way to solve the problem is to manually remove all entries like Schema="SchemaName" from the SSDL part of the model.
Everything works properly in this case.
Sorry, it's not a robust answer, but I found this project on CodePlex (as well as this question) while googling around for a similar problem:
http://efmodeladapter.codeplex.com/
The features include:
* Run-time adjustment of model schema, including:
  * Adjusting data-level table prefixes or suffixes
  * Adjusting the owner of database objects
Some code from the docs:
public partial class MyObjectContext : BrandonHaynes.ModelAdapter.EntityFramework.AdaptingObjectContext
{
    public MyObjectContext()
        : base(myConnectionString,
               new ConnectionAdapter(
                   new TablePrefixModelAdapter("Prefix",
                       new TableSuffixModelAdapter("Suffix")),
                   System.Reflection.Assembly.GetCallingAssembly()))
    {
        ...
    }
}
Looks like it's exactly what you're looking for.
The connection string for EF is in the config file. There is no need to change the SSDL file.
EDIT
Do you have the prod and test schema in the same database?
If yes, you can fix it by using a separate database for prod and test, with the same schema name in both databases.
If no, you can fix it by using the same schema name in both databases.
If you absolutely must have different schema names, create two EF models, one for test and one for prod, then select which one to use in code based on a value in your config file.
When I create a new "ADO.NET Entity Data Model", there are two properties, "Entity Container Name" and "Namespace", available for editing in the design view. Using Namespace.EntityContainerName, you can create a new instance specifying a connection string:
MyEntities e = new MyEntities("connstr");
e.MyTable.Count();
I'm not sure if this helps you or not, good luck!
Also, this is a good case for multiple layers (doesn't have to be projects, but could be).
Solution
* DataAccess - Entities here
* Service - Wraps access to DataAccess
* Consumer - Calls Service
In this scenario, the consumer calls service, passing in whatever factor determines which connection string is used. The service then instantiates an instance of data access passing in the appropriate connection string and executes the consumer's query.
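A minimal sketch of that layering, with made-up names (MyEntities being the generated context from the snippet above):

    using System.Configuration;
    using System.Linq;

    // DataAccess layer: owns the context, uses whatever connection string it is given.
    public class TaskRepository
    {
        private readonly string _connectionString;

        public TaskRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public int CountTasks()
        {
            using (var context = new MyEntities(_connectionString))
                return context.MyTable.Count();
        }
    }

    // Service layer: resolves the connection string from the factor the consumer passes in.
    public class TaskService
    {
        public int CountTasks(string environment)
        {
            var cs = ConfigurationManager.ConnectionStrings[environment].ConnectionString;
            return new TaskRepository(cs).CountTasks();
        }
    }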
Here is a similar question with a better answer:
Changing schema name on runtime - Entity Framework
The solution that worked for me was the one written by Jan Matousek.
Solved my problem by moving to SQL Server and away from MySQL.
MySQL and MSSQL interpret "schemas" differently. Schemas in MySQL are synonymous with databases. When I created the model, the schema name (which is the same as the database name) was hard-coded into the generated model XML. In MSSQL the schema is "dbo" by default, which also gets hard-coded, but that isn't an issue since in MSSQL schemas and databases are different things.