Currently I have an ERP which is a Winforms based client (with SQL Server), which gets delivered and updated on desktops using ClickOnce.
The current version is using Entity Framework 4 (ObjectContext-based) and database first. The way I am doing updates to the client when there's a database schema change is a four step process:
1. Create an intermediary updated database schema on production with compatible columns (allow null everywhere, provide a default value, etc.). Old clients can connect to that database and keep working as if nothing had changed.
2. Update desktop clients to an intermediary version with the updated features, which accounts for this intermediary schema but has all the "final schema" features.
3. Once all clients are updated and all records are compatible with the "final" schema, make a new update to the schema with the needed database constraints.
4. Update all clients to a final version which is mapped to this final schema (and which accounts for database constraint errors and needs those schema changes to work).
I've found this process, though a bit cumbersome for us, to be better for the clients, who can update when they see fit and don't get interrupted by an update in the middle of their work (which may involve customers standing in front of them who don't want to wait for a software update).
Now I have made an almost-complete rewrite of the client (still Winforms), using EF6 and code-first, with migrations.
I've been searching for documentation but can't find anything (it seems there's only web programming these days, where updates to the database and the web client can generally be done simultaneously without interrupting users). The problem is that once I apply migrations on production, non-updated clients can no longer work with the database: EF complains and throws exceptions when the context is instantiated if it's not up to date with the database schema.
Specific question: is there a way to have an EF6 code-first DbContext work with a newer migration of the database schema than the one it was compiled against, as long as the two are compatible? If so, I could just keep doing what I've been doing so far.
And an (I guess) opinion-based question, if anyone wants to expand on the actual answer: is there any better way to handle this scenario? I'm sure I'm not the only one with this problem, but the keywords needed to Google for documentation are too broad, and so far only web scenarios have come up in my searches.
I'm currently at a stage in the client rewrite where major changes are still possible, so I don't mind if the solution complicates parts of the code.
When an application initializes the model database, either by directly calling DbContext.Database.Initialize or by instantiating the first DbContext, it checks whether the model in the application and the model in the database match.
To do so, it calculates the model hash, and compares it with the hash stored in the __MigrationHistory table (or in the EdmMetadata table, if it was updated from EF 4.x). This is done in the System.Data.Entity.Internal.ModelCompatibilityChecker.CompatibleWithModel method, which receives a parameter named throwIfNoMetadata which happens to be false in the internal implementation, so no exception is thrown if there is no metadata.
So, if you make these tables disappear in some way before the database is initialized, you'll avoid the error. The important point is that you must make this change without using DbContext; otherwise the database will try to initialize and, if the table exists, it will fail. You can use plain ADO.NET to drop the tables.
Take into account that the metadata tables can be automatically created, for example by applying migrations.
You can also use ctx.Database.CompatibleWithModel(true) to check whether the database metadata exists and is compatible before deciding to get rid of it. The parameter is precisely the throwIfNoMetadata mentioned above.
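To illustrate the drop-with-plain-ADO.NET idea, here is a minimal sketch, assuming SQL Server and a placeholder connectionString variable; this is only one way to do it, not part of the answer above.

using System.Data.SqlClient;

// Drop the EF metadata tables before any DbContext touches the database.
// With no metadata left, the internal throwIfNoMetadata: false check is silent.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    const string sql =
        @"IF OBJECT_ID('dbo.__MigrationHistory') IS NOT NULL DROP TABLE dbo.__MigrationHistory;
          IF OBJECT_ID('dbo.EdmMetadata') IS NOT NULL DROP TABLE dbo.EdmMetadata;";
    using (var command = new SqlCommand(sql, connection))
    {
        command.ExecuteNonQuery();
    }
}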
Compatibility check in db initializers:
The default DB initializer is CreateDatabaseIfNotExists, and it checks model compatibility with throwIfNoMetadata set to false; that's why this solution works. Alternatively, if you implement your own DB initializer that doesn't run the check at all, that should work too.
public virtual void InitializeDatabase(TContext context)
{
    Check.NotNull(context, "context");

    var existence = new DatabaseTableChecker().AnyModelTableExists(context.InternalContext);
    if (existence == DatabaseExistenceState.Exists)
    {
        // If there is no metadata either in the model or in the database, then
        // we assume that the database matches the model because the common cases for
        // these scenarios are database/model first and/or an existing database.
        if (!context.Database.CompatibleWithModel(throwIfNoMetadata: false, existenceState: existence))
        {
            throw Error.DatabaseInitializationStrategy_ModelMismatch(context.GetType().Name);
        }
    }
    else
    {
        // Either the database doesn't exist, or exists and is considered empty
        context.Database.Create(existence);
        Seed(context);
        context.SaveChanges();
    }
}
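For completeness, a rough sketch of such a do-nothing initializer that skips the check entirely; the context name in the registration comment is a placeholder.

using System.Data.Entity;

public class SkipModelCheckInitializer<TContext> : IDatabaseInitializer<TContext>
    where TContext : DbContext
{
    public void InitializeDatabase(TContext context)
    {
        // Intentionally empty: no compatibility check, no database creation, no seeding.
    }
}

// Registered once at application start-up, before the first context is used:
// Database.SetInitializer(new SkipModelCheckInitializer<MyContext>());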
I apologize if this is duplicative; I could find nothing directly pertaining.
The difficulty involves EF Core (v 3.1.8, if it matters), but is not specific or restricted thereto. I am doing code first, creating a number of entities, but the key point is that I am getting my initial data set from an app that I am trying to replace. My new app has a number of structural differences in every corresponding entity, but the data in the old app is still critical, so I will be transferring it to my new database. (Old db is hosted by MS SQL 2008; new db is hosted by MS SQL 2019, if it matters).
Most of the key fields are GUIDs, and the problem is that in EF Core, at the point in the future when I want to use the new app to do more data entry, I will also want the database to choose the GUID. In EF Core Fluent API parlance, that would be, for example:
modelBuilder.Entity("ReplaceOldApp.Models.Address", b =>
{
b.Property<Guid>("AddressID")
.ValueGeneratedOnAdd()
.HasColumnType("uniqueidentifier");
}
However, if I inform EF Core that I want the database to create the key, then it will create the tables such that when I try to transfer the data from the old database (whether using EF or some other means), the new database will ignore the old GUID and create a new, unrelated one. (Or at least, that's what I think will happen; I'm not ready to try it yet.) If that happens, then the data from, say, the old Person entity will no longer be related to its corresponding entities (such as the Address entity implied above) in the new database, because all records will have shiny new GUIDs. I will have all the information, and no way to actually use it.
Obviously I can tell EF Core to inform the database that it will not be creating the GUIDs, and I can then read, unmunge and transfer the data from the old database to the new without fear of data loss (God willing). But then going forward, for any new data entry, the GUIDs will not be automatically genned. I can of course then mod my IEntityTypeConfiguration Fluent API classes for the various entities and do a second migration, re-genning the affected tables, but I'm worried that EF Core will decide that it needs to DROP the tables to accommodate such a change. (Again, I do not know for sure because I have not tried it: sorry.)
So my question is: How would you approach such a situation? Should I ignore EF and do something clever with MS SQL Studio? Should I do two migrations with a transfer in-between? Should I tell the database, even though it has been told to gen the keys, somehow to accept the old keys without changing things, perhaps via LINQ?
Edit:
I'm sure SSIS would work to transfer the data from old to new databases, but the learning curve appears daunting, and I am only trying to solve one problem, not gain a new career. Powershell ditto, although it may be a bit more of a hacker's tool, and as such knowledge of it might assist tweaking or help to solve a diverse set of one-time SQL Server headaches. However, again, as would you, I prefer to use what I know, or failing that, learn or learn more about a tool which promises to serve me consistently into the future.
With the very welcome new (to me) information about IDENTITY_INSERT, and information gained from Linq To Sql and identity_insert, I believe I should not use LINQ to SQL because it may assume that IDENTITY_INSERT is OFF and simply filter out the crucial GUID, failing therefore to provide it to the target server. Rather, it seems I can use C# to produce a series of generated SQL statements, and then run each one on the target server inside a TransactionScope(). Because each such insert will thereby run 'in the same connection', the state of IDENTITY_INSERT will be preserved for that entire insert transaction, and (creek don't rise) it should work.
Again, I appreciate your answer, Randy in Marin. It has, it seems, led me to an approach that will work within the potential constraints of my context (EF Core), while allowing me to preserve the crucial existing IDENTITY information. Peace.
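For reference, a minimal sketch of that approach as I understand it; the table name and the source of the generated statements are hypothetical.

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Transactions;

static void RunGeneratedInserts(string targetConnectionString, IEnumerable<string> generatedInserts)
{
    // One ambient transaction and one open connection, so SET IDENTITY_INSERT
    // stays in effect for every generated INSERT statement.
    using (var scope = new TransactionScope())
    using (var connection = new SqlConnection(targetConnectionString))
    {
        connection.Open();
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = "SET IDENTITY_INSERT dbo.Address ON";
            cmd.ExecuteNonQuery();

            foreach (string insertSql in generatedInserts) // built elsewhere from the old data
            {
                cmd.CommandText = insertSql;
                cmd.ExecuteNonQuery();
            }

            cmd.CommandText = "SET IDENTITY_INSERT dbo.Address OFF";
            cmd.ExecuteNonQuery();
        }
        scope.Complete();
    }
}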
Not being an EF programmer, I don't know if there is an option for identity insert that you can enable for a migration. You might search the term to see if it comes up.
Our team supports database migrations. We can do it a number of ways. I would not even consider EF, because it's not designed for data migrations - or for database design. (And because we tend to use what we know.)
This is not the way I would do it, but it might be better than SSIS if you have not used SSIS. If the tables are in the same database or in databases on the same server, you can use T-SQL to load each table one at a time. Even if not on the same server, a linked server would allow a distributed transaction. (I avoid linked servers like the plague, but for a one time thing like a migration I would tolerate it. I would rather restore a copy of the source database to the destination server to use as a source. Distributed transactions gone wrong have forced me to reboot critical servers.)
Each table can have a 4 part name. If the server part (e.g., using a linked server name) is not present, the local instance is used. If the database part is not present, the current database is used. This is the format I assume for the "src_table" and "dst_table".
[myserver\myinstance].[mydatabase].[myschema].[mytable]
Each table is loaded with T-SQL as follows:
TRUNCATE TABLE dst_table
SET IDENTITY_INSERT dst_table ON
INSERT dst_table (...) SELECT ... FROM src_table
SET IDENTITY_INSERT dst_table OFF -- must be turned off - only 1 table can have this ON
If there are foreign keys, some tables (e.g., def tables) would need to be loaded first.
If the table does not have an IDENTITY column (EF code creates all values), you don't use the IDENTITY_INSERT stuff. It will fail if you use it and there is not an identity column. It will fail if you don't use it and try to insert into an identity column.
If there is a lot of data in a table, the transaction might be too big or slow. Inserting in batches might be called for.
If it was something to run on a schedule, I would likely create a SSIS package to do the load.
If I wanted to try something new, I would use powershell and the DBATools module cmdlets to see if extracting to csv and importing the csv would be efficient. The import cmdlet has a column mapping parameter, among many others. PowerShell could be used to do transformation, but I think this crosses over into SSIS territory.
I have dealt with migrations where the GUIDs and IDs no longer related after the move. Using queries joining the new data to the old data, we were able to fix the related values. It's likely more work to fix it after than to plan for it to be correct from the start.
Could someone explain the concept of Migrators (specifically fluentmigrator)?
Here are the (possibly confused) facts I've gleaned on the subject:
Is it a way to initially create then maintain updates for a database by way of versioning.
The first migration (or initial version of the database) would contain all the tables, relationships and properties required (done either fluently or using a chunk of sql in a script).
When you want to push a change to a database, you would create a new migration method (Up and Down), something like add a new table or modify a field.
To deploy one of these migrations, you would use a command line specifying the dll containing the migration, the connection string and the required version.
If you had a rather complex set of data models, wouldn't it be rather difficult and time consuming to create a migration definition for all of that?
I know with nHibernate/fluent you can easily generate tables for a database without having to define anything other than the models and map files. Is there a way to make this configuration compatible with the Migrator/Versioning?
When nhibernate/fluent is in charge of generating a database, I do not necessarily need to define every aspect of the tables. It's done either via convention or via the mapping files. With the migrators, would I need to define this level of detail?
Lots of questions here. I'll answer the questions with a focus on FluentMigrator.
Is it a way to initially create then maintain updates for a database by way of versioning.
FluentMigrator is a way to version control your database schema. Everyone does it in some way: either manually, with sql scripts, with a tool like SqlCompare, or with a Visual Studio Database project. All these methods are easy to mess up. It is so easy to make a mistake when releasing a new version and cause the system to crash. Migrations are a better way to handle this.
FluentMigrator allows you to define a change to the schema as code and this is usually checked in to your source control with the other code changes. Meaning that you can say version 1.XX of your system should have version 123 of the database. It means if you roll back your code to the previous version you also know what version of the database to rollback to as well.
It can be used both to create the database schema from the beginning or to start with version control of the schema for an existing database.
A Migration is a way to describe a change to your database schema. FluentMigrator creates a VersionInfo table and stores the unique id (version number) of the Migration after it has been applied.
For example, say I have two Migrations, one with Id 1 and one with Id 2. If I then execute the first Migration, Id 1 will be stored in the VersionInfo table, and I can look there and know that the database is at version 1 and that version 2 has not been applied yet.
Knowing which version the database schema is at is very useful when pushing changes from Test to Production, or if you have multiple copies of the database in Production. For example, I have a customer with offices all around the world; each office has its own copy of the database and all of them are on different versions. Without knowing the database version it would be very difficult to update them safely.
Most of the time I do not need to actually look in the VersionInfo table, FluentMigrator handles this automatically. It compares the assembly with Migrations to the VersionInfo table and figures out which changes have not been applied yet and then executes those.
The first migration (or initial version of the database) would contain all the tables, relationships and properties required (done either fluently or using a chunk of sql in a script).
The starting point is up to you. You can have a first migration that is an sql script that you have generated from the current database. You could also use one of the contrib projects like FluentMigrator.T4 to generate a Fluent Migration. Or you could just decide that the existing database is the starting point and save a copy of it to be able to restore it as version 1.
I have introduced FluentMigrator to a lot of legacy databases without any major problems.
When you want to push a change to a database, you would create a new migration method (Up and Down), something like add a new table or modify a field.
Yes, Up is used to apply the change specified in the Migration and Down rolls it back. So Up could be to create a table and Down could be to drop the table.
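As a small illustration (the table and column names here are made up), a migration and its rollback might look like this:

using FluentMigrator;

[Migration(2)]
public class AddEmailToCustomer : Migration
{
    public override void Up()
    {
        // Applied when migrating up to version 2
        Alter.Table("Customer").AddColumn("Email").AsString(255).Nullable();
    }

    public override void Down()
    {
        // Applied when rolling back below version 2
        Delete.Column("Email").FromTable("Customer");
    }
}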
To deploy one of these migrations, you would use a command line specifying the dll containing the migration, the connection string and the required version.
There are three runners available to execute migrations: the command line runner, the NAnt task and the MSBuild task. These are usually executed as part of a build script.
The MigrationRunner class can also be used in code. You might do this if you wanted to build your own runner or if you have other needs (like building databases dynamically or automatically updating the database if a new migration is added.)
If you had a rather complex set of data models, wouldn't it be rather difficult and time consuming to create a migration definition for all of that?
I have mostly answered this already. It is usually quite easy to generate an sql script for a database. For Sql Server it takes less than a minute to generate the script even for large databases. This script can be saved in a .sql file and executed as the first migration using the Execute.EmbeddedSqlScript expression. It works a treat.
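A baseline migration along those lines might look roughly like this; the script name is a placeholder, and the exact Execute method name can differ between FluentMigrator versions.

using FluentMigrator;

[Migration(1)]
public class InitialSchema : Migration
{
    public override void Up()
    {
        // Runs the generated schema script stored as an embedded resource in the assembly.
        Execute.EmbeddedScript("InitialSchema.sql");
    }

    public override void Down()
    {
        // The baseline is normally not rolled back.
    }
}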
I know with nHibernate/fluent you can easily generate tables for a database without having to define anything other than the models and map files. Is there a way to make this configuration compatible with the Migrator/Versioning?
At the moment there is no such integration and, in practice, I at least don't miss it. There was some discussion about connecting Fluent NHibernate and FluentMigrator, but it would be a lot of work. It would enable scaffolding to generate changes to the model, like EF Code First migrations do. It's not on the roadmap at the moment, however.
When nhibernate/fluent is in charge of generating a database, I do not necessarily need to define every aspect of the tables. It's done either via convention or via the mapping files. With the migrators, would I need to define this level of detail?
Yes, you would need to define at that level of detail. FluentMigrator's migrations are a DSL (its own little language) for defining schema changes that are translated to sql. You can also write sql directly using the Execute.Sql expression. Entity Framework's migrations have that sort of integration, which has both advantages and disadvantages.
Check out the wiki or one of the tutorials here, here (part 1) or here (part 2) for more help getting started.
I have an existing application with a SQL database that has been coded using a database first model (I create an EDMX file every time I have schema changes).
Some additional development (windows services which support the original application) has been done which uses EF POCO/DbContext as the data layer instead of an EF EDMX file. No initializer settings were ever configured in the DbContexts, but they never modified the database as the DbSet objects always matched the tables.
Now, I've written a separate application that uses the existing database but only its own, new tables, which it creates itself using EF's initializer. I had thought this would be a great time to use EF Code First to handle managing these new tables. Everything worked fine the first time I ran the application, but now I am getting this error from some of my original EF POCO DbContexts (which never used an initializer):
The model backing the 'ServerContext' context has changed since the database was created. Consider using Code First Migrations to update the database.
After some investigation, I've discovered that EF compares a hash of its schema with a hash stored somewhere in the SQL Server database. This value doesn't exist until a context has actually run an initializer against the database (in my case, not until the most recent application added its tables).
Now, my other DbContexts throw an error as they read the now existing hash value and it doesn't match its own. The EF connection using the EDMX doesn't have any errors.
It seems that the solution would be to put this line in protected override void OnModelCreating(DbModelBuilder modelBuilder) in all the DbContexts experiencing the issue:
Database.SetInitializer<NameOfThisContext>(null);
But what if, later on, I want to write another application and have it create its own tables using EF Code First? I would then never be able to reconcile the hash between that theoretical even-newer context and the one that is causing the issue right now.
Is there a way to clear the hash that EF stores in the database? Is EF smart enough to only alter tables that exist as a DbSet in the current context? Any insights appreciated.
Yes, bounded DB contexts are actually good practice.
E.g., a base context class to share a common connection to the DB, with each subclass using Database.SetInitializer(null); as you suggest.
Then go ahead and have one large context that has the full "view of the DB"; this context is responsible for all migrations, and ONLY that context should do them. A single source of truth.
Having multiple contexts responsible for DB migration is a nightmare I don't think you will solve.
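A minimal sketch of that layout, with purely hypothetical entity and context names:

using System.Data.Entity;

public class Order { public int Id { get; set; } }
public class Customer { public int Id { get; set; } }

// A bounded context: sees only its slice of the schema and never checks or
// modifies the database, because its initializer is disabled.
public class OrdersContext : DbContext
{
    static OrdersContext()
    {
        Database.SetInitializer<OrdersContext>(null);
    }

    public DbSet<Order> Orders { get; set; }
}

// The single "source of truth" context: maps the whole database and is the
// only context that Code First Migrations are enabled for.
public class MigrationsContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<Customer> Customers { get; set; }
    // ... every other table in the database
}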
Messing with the system entries created by code first migrations can only end in tears.
I saw exactly the topic you describe in a Julie Lerman video.
Her suggested solution was a single "migration" context plus many bounded DB contexts.
In case you have a pluralsight account:
http://pluralsight.com/training/players/PsodPlayer?author=julie-lerman&name=efarchitecture-m2-boundedcontext&mode=live&clip=11&course=efarchitecture
What EF version are you using? EF Code First used to store a hash of the SSDL in the EdmMetadata table. Then, in .NET Framework 4.3, things changed a little and the EdmMetadata table was replaced by the __MigrationsHistory table (see this blog post for more details). But it appears to me that what you are really after is multi-tenant migrations, where you can have multiple contexts using the same database. This feature has been introduced in EF6 (currently the Alpha2 version is publicly available). Also, note that the EdmMetadata/__MigrationHistory tables are specific to Code First. If you are using the designer (Model First/Database First), no additional information is stored in the database and the EF model is not checked against the database. This can lead to hard-to-debug bugs and/or data corruption.
I am working on a project which uses Entity Framework 4.1 for persisting our various objects to the database (code first).
I am testing in Visual Studio with a local SQL Express DB, and our Jenkins server deploys committed code to a testing server. When this happens I temporarily change my local connection string to point to the testing DB server and run a unit test to re-create the test database so that it matches our latest entities, etc.
I've recently noticed our testing server is giving this error:
The model backing the 'EntityFrameworkUnitOfWork' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data.
This is usually an indication that our code has changed and I need to run the unit test to re-create the database. Except I just did that! I don't believe there is anything wrong with our deployment process - the DLLs on the test server seem to be the same versions as in my local environment. Are there any other settings or environment factors that can cause this error about the model having changed since the database was created?
I'm new here - thanks for any help!
The error you see means that the model hash stored in EdmMetadata table is different from the model hash computed from the model in the application. Because you are running database creation from a different application (your dev. application) it is possible that those two differ. Simple advice here is: don't use different applications for database creation and instead let your main application create the database (either automatically or for example with some admin interface).
As another option you should be able to turn off this check completely by removing the convention responsible for these checks:
modelBuilder.Conventions.Remove<IncludeMetadataConvention>();
Model hash computation is dependent on current entities in your application (any simple change result in different model hash) and on database server versions / manifest. For example a model deployed on SQL server 2005 and 2008 will have different model hash (Express vs. Full or 2008 vs. 2008 R2 should not result in different model hash).
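In context, that convention removal sits in OnModelCreating; here is a sketch with a placeholder context name (in EF 4.1 the convention lives in System.Data.Entity.Infrastructure, as far as I recall):

using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class MyContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Stops EF from writing/checking the EdmMetadata model hash for this context.
        modelBuilder.Conventions.Remove<IncludeMetadataConvention>();
    }
}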
This can happen due to reflection ordering differences across different platforms. To verify, you can use the EdmxWriter API to compare the EDMX from both environments. If any of the tables have different column ordering, then this is the issue.
To work around this, you can change the way your test database gets updated so that it is updated from your test server rather than from your local box.
We are going to fix this issue in the next release.
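A quick sketch of that EdmxWriter comparison, using the context name from the error message and a placeholder output path:

using System.Data.Entity.Infrastructure;
using System.Xml;

// Writes the model EF builds at runtime to an .edmx file; run this in each
// environment and diff the two files (column ordering included).
using (var context = new EntityFrameworkUnitOfWork())
using (var writer = XmlWriter.Create(@"C:\temp\model.edmx", new XmlWriterSettings { Indent = true }))
{
    EdmxWriter.WriteEdmx(context, writer);
}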
In the code-first approach, the SSDL is generated during the execution of the code. One of the pieces of information included in the generated SSDL is the name of the provider used in the DbConnection. As you said, you're connecting to different database engines, so you must use two different providers. This completely changes the output of the hashing function.
The below code was extracted from the EntityFramework assembly:
using (XmlWriter writer = XmlWriter.Create(output, settings))
{
    new SsdlSerializer().Serialize(database, providerInfo.ProviderInvariantName, providerInfo.ProviderManifestToken, writer);
}
This might help, and the link to Scott G's blog should be a solution to your problem; also check this question link.
Edit 1: this is the link to Scott G's blog
Edit 2: You may also check this if you use database first on your integration server
Edit 3: This is a more detailed answer, like the one from Scott G
Are the two servers running your application on different operating systems (or service packs)? It appears the SHA256CryptoServiceProvider used can throw a PlatformNotSupportedException, which causes it to fall back to another method.
http://msdn.microsoft.com/en-us/library/system.security.cryptography.sha256cryptoserviceprovider.sha256cryptoserviceprovider.aspx
// System.Data.Entity.Internal.CodeFirstCachedMetadataWorkspace
private static SHA256 GetSha256HashAlgorithm()
{
    SHA256 result;
    try
    {
        result = new SHA256CryptoServiceProvider();
    }
    catch (PlatformNotSupportedException)
    {
        result = new SHA256Managed();
    }
    return result;
}
You may be able to test this by using reflection to invoke the following 2 (internal/private) methods on each server.
MetaDataWorkspace.ToMetadataWorkspace(DbDatabaseMapping, Action<string>)
CodeFirstCachedMetadataWorkspace.ComputeSha256Hash(string xml);
Entity Framework code first creates a table called EdmMetadata. It keeps a hash of your current model. Once you run the application EF checks if the model being used is the same as the model that the db 'knows about'.
If you want to perform database migration, I suggest you use EF Code first migrations though it's still an alpha.
If you don't want to use migrations you can either:
handle the schema change manually - that means moving the content of the EdmMetadata table to the test server along with all the changes
or
set the db initializer to DropCreateDatabaseIfModelChanges (or better, something derived from it that uses the Seed() method to write the initial data). To set the initializer, either call Database.SetInitializer() on application start or use the appSettings:
<add key="DatabaseInitializerForType Fully.Qualified.Name.Of.Your.DbContext," value="Fully.Qualified.Name.Of.The.Initializer" />
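A sketch of the initializer-derived option, with hypothetical context, entity and property names:

using System.Data.Entity;

public class MyInitializer : DropCreateDatabaseIfModelChanges<MyContext>
{
    protected override void Seed(MyContext context)
    {
        // Write whatever initial data the freshly recreated database needs.
        context.Customers.Add(new Customer { Name = "Sample customer" });
        context.SaveChanges();
    }
}

// On application start, as an alternative to the appSettings entry:
// Database.SetInitializer(new MyInitializer());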
In my case I had merely renamed my .mdf file by accident and got this error, so look out for that as well.
Situation
I'm creating a C#/WPF 4 application using a SQL Compact Edition database as a backend with the Entity Framework and deploying with ClickOnce.
I'm fairly new to applications using databases, though I don't suspect I'll have much problem designing and building the original database. However, I'm worried that in the future I'll need to add or change some functionality which will require me to change the database design after the database is already deployed and the user has data in the database.
Questions
Is it even possible to push an updated database design out to users via a ClickOnce update in the same way it is for code changes?
If I did, how would the user's data be affected?
How is this sort of thing done in real situations? What are some best-practices?
I figure that in the worst case, I'd need to build some kind of "version" number into the database or program settings and create some routine to migrate the user's current version of the database to the new one.
I appreciate any insight into my problem. Thanks a lot.
There are some 'tricks' that are employed when designing databases to allow for design changes.
Firstly, many database designers create views to code against, rather than coding directly to the tables. This allows tables to be altered (split or merged, etc) while only requiring that the views are updated. You may want to investigate database refactoring techniques for this.
Secondly, you can indeed add versioning information to the database (commonly done as a 'version' table with a single field). Updating the database can be done through code or through scripts. One system I worked on would automatically check the database version and then progressively update the schema through versions in code until it matched the required version for the runtime. This was quite an undertaking.
I think your "worst" case is actually a pretty good route to go in this situation. Maintain a database version in the DB and have your application check and update the DB as necessary. If you build your updater correctly, it should be able to maintain the user's data. Depending on the update this might involve creating temporary tables to hold the existing data and repopulating new versions of the tables from them. You might be able to include a new SDF file with the new schema in place in the update process and simply transfer the data. It might be slightly easier that way -- you could use file naming to differentiate versions and trigger the update code that way.
Unfortunately version control and change management for databases is desperately, desperately far from what you can do with the rest of your code.
If you have an internal-only environment there are a number of tools which will help you (DBGhost, Red Gate has a newish app, some deployment management apps) but all of them are less than full solutions imho, but they are mostly good enough.
For client-shipped solutions you really don't have anything better than your worst case I'm afraid. Just try and design with flexibility in mind - see Dr.Herbie's answer.
This is not a solved problem basically.
"Smart Client Deployment with ClickOnce" by Brian Noyes has an excellent chapter on this issue. (Chapter 5)
ISBN 978-0-32-119769-6
He suggests something like this:
if (ApplicationDeployment.CurrentDeployment.IsFirstRun)
{
    MigrateData();
}

private void MigrateData()
{
    string previousDb = Path.Combine(ApplicationDeployment.CurrentDeployment.DataDirectory, @".\pre\mydb.sdf");
    if (!File.Exists(previousDb))
        return;

    string oldConnString = @"Data Source=|DataDirectory|\.pre\mydb.sdf";
    string newConnString = @"Data Source=|DataDirectory|\mydb.sdf";

    // If you are using datasets, perform any migration here with the old and new table adapters.
    // Otherwise use an .sql data migration script.
    // Store the version of the database in the database, check it at the beginning of your
    // update script, and GOTO the correct line in the SQL script.
}
A common solution is to include a version number somewhere in the database. If you have a table with miscellaneous system data, throw it in there, or create a table with one record just to hold the DB version number. Then whenever the program starts up, check if the database version is less than the expected version. If so, execute the required SQL CREATE, ALTER, etc, commands to bring it up to speed. Have a script or function for each version change. So if you see the database is currently at version 6 and the code expects version 8, execute the 6 to 7 update and the 7 to 8 update.
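A minimal sketch of that startup check for the SQL CE scenario above, assuming a one-row SchemaVersion table; the table, scripts and version numbers here are all made up.

using System.Collections.Generic;
using System.Data.SqlServerCe;

static void UpgradeDatabase(string connectionString)
{
    // One hand-written script per version step, keyed by the version it upgrades to.
    var upgradeScripts = new Dictionary<int, string>
    {
        { 7, "ALTER TABLE Foo ADD Bar INT NULL" },                // 6 -> 7
        { 8, "CREATE TABLE Baz (Id INT IDENTITY PRIMARY KEY)" },  // 7 -> 8
    };
    const int expectedVersion = 8;

    using (var connection = new SqlCeConnection(connectionString))
    {
        connection.Open();

        int currentVersion;
        using (var cmd = new SqlCeCommand("SELECT Version FROM SchemaVersion", connection))
            currentVersion = (int)cmd.ExecuteScalar();

        // Apply each missing step in order, recording progress as we go.
        for (int v = currentVersion + 1; v <= expectedVersion; v++)
        {
            using (var cmd = new SqlCeCommand(upgradeScripts[v], connection))
                cmd.ExecuteNonQuery();
            using (var cmd = new SqlCeCommand("UPDATE SchemaVersion SET Version = " + v, connection))
                cmd.ExecuteNonQuery();
        }
    }
}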
Another method we used on one project I worked on was to ship a schema-only, no-data database with the code. Every time you installed a new version, the installer would also install the latest copy of this new blank database. Then when the program started up, it would compare the user's current database schema with the new database schema and determine what database changes were needed on the fly. For example, if in the "reference schema" table Foo had a column named Bar, and there was no column Bar in the user's current database, we would generate an "ALTER TABLE Foo ADD Bar ..." and execute it. While writing the first draft of the program to do this was a fair amount of work, once we'd done it there was pretty much zero maintenance to keep the DB schema up to date. The conversion was just done on the fly.
Note that this scheme doesn't handle DB changes that require changing data values, like if you add a new column that must be initially populated by doing some computation on data from other tables or some such. But if you can generate new data from old data, that must mean that the new data is redundant and your database is not normalized. I don't think the situation ever came up for us.
I had the same issue with an Android app when adding a table to an SQLite database. I changed the name of the database to include a version extension, like theDataBaseV1, deleted the previous one, and the app works fine.
I just changed the name of the database and the name in this line of code
private static final String DATABASE_NAME = "busesBogotaV2.db";
in the DBManager when it's going to open.
Does anybody know if this trivial solution has any unintended consequences?