Dynamically regenerate model when DB changes? - c#

Is it possible to have Entity Framework dynamically add/remove tables and columns in its DbContext without recompiling the project? The use case is EF inside a GUI app where the DB schema may change behind the scenes over its lifetime - we don't want the GUI app to be recompiled on every DB change; it should just visually show the new table classes (i.e. the schema) along with a few details like type, property name, etc.

It's technically not possible unless your GUI app talks to your database through some detached "API" - a real web API, WCF service, etc. Even if you disable automatic migrations and manage to make your application believe the database is up to date (which can probably be done with some magic, dynamic DLL compilation, etc.), there's a big chance of something funky happening - changed FKs, PKs, restrictions, constraints, data types, and so on. This will cause unexpected behavior in Entity Framework and bring only grief.
Only with such a detached API can you do this without touching the real GUI app - you just redeploy the external project and voila, it works, provided you've set up your DTOs and methods properly. Otherwise, if your API changes uncontrollably, you will run into DTO mismatch problems, which you will have to handle with versioning, etc.
It's overall a bad idea to change the model backing your application without recompilation in any case, since C# is not a dynamic language.

Entity Framework does not do that. It would be a custom solution, and a pretty expensive one.
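If all the GUI needs is to display the current schema (tables, column names, types), the custom solution doesn't even need EF: plain ADO.NET can read the live schema at runtime. A minimal sketch, assuming SQL Server and a made-up connection string:

using System;
using System.Data;
using System.Data.SqlClient;

class SchemaDump
{
    static void Main()
    {
        using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
        {
            conn.Open();
            // The "Columns" schema collection has one row per column:
            // table name, column name, data type, nullability, and so on.
            DataTable columns = conn.GetSchema("Columns");
            foreach (DataRow row in columns.Rows)
            {
                Console.WriteLine("{0}.{1} : {2}",
                    row["TABLE_NAME"], row["COLUMN_NAME"], row["DATA_TYPE"]);
            }
        }
    }
}

Re-running this after the schema changes picks up new tables and columns with no recompilation, which covers the display side of the question; it just doesn't give you EF's change tracking or querying.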

Related

How to use Entity Framework code-first with a newer database model version

Maybe an odd question, but we have a scenario in which we want to use Entity Framework code-first in an environment whose database could have a newer/higher model version than the code itself.
Let me elaborate a bit. We have a couple of solutions which all use a core assembly containing the overall data model. The solutions are mainly sites and apps deployed to several different Azure Web Sites, so the solutions run next to each other. The only thing they share is the Azure database.
Now the scenario comes into play. When we update the database model in the core assembly and update one of the solutions in Azure, the underlying database is updated when the model is loaded within that solution. No problem there, works like a charm...
The problem starts when one of the other solutions is loaded. These other solutions are still using the previous core assembly, which now has an outdated EF code-first model compared to the database they are connecting to. So a nice exception is thrown, as shown below.
The model backing the '{NAME}' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data.
The question is whether we can force the model just to load and ignore the changes made within the database. We internally have a policy of only applying non-breaking changes to the database, so the model should be able to load without any problems.
Thanks in advance for the information and tips!
I might be wrong (I'm not sure whether I remember correctly), but if it doesn't interfere with your application configuration, you can set the DB initializer to null:
public PortalDbContext()
    : base("name=PortalConnectionString")
{
    Database.SetInitializer<PortalDbContext>(null);
}
Or you could create a custom initializer:
public class BlogContextCustomInitializer : IDatabaseInitializer<BlogContext>
{
    public void InitializeDatabase(BlogContext context)
    {
        if (context.Database.Exists())
        {
            if (!context.Database.CompatibleWithModel(true))
            {
                // Do something...
            }
        }
    }
}
If you're using EF Code-First, the model must match the database.
Even if you found a way to circumvent that limitation you'd be doing something dangerous.
Let me explain: if you update the database from "Solution A", the model in "A" will match the database, and any further changes to the model in that solution can be applied to the database without any problem at all. So far, so good. However, if you do what you're asking in this question, i.e. you arrange things so that "Solution B" keeps working even though its model doesn't match the DB, and then you make a change to the model in "Solution B", how do you apply it? How can "Solution B" know what changes to apply? How can "B" determine which changes made by "A" should be left as they are, and which are the new changes made by "B" that must be applied to the database?
If you carried on like this, you'd end up with two different code-first models, neither of which matches the database. Besides, how could you guarantee that both applications work correctly? How can you ensure that changes in "A" don't affect code in "B" and vice versa?
The safest way to avoid this problem is to share the assembly containing the code-first model between both solutions. Any other solution will be troublesome sooner or later. Perhaps you'll have to refactor your solutions so that they can share the same DbContext; the DbContext should be the only thing in that project. I usually have an Entities project and a DbContext project which references Entities, and both solutions then reference these projects. These projects can live in one of the solutions, or in a completely different solution. Of course, in one or both solutions you'll have to add a reference to the DbContext assembly, instead of the project, and keep it updated, for which you can use post-build scripts. That way, when you recompile your solutions, you'll also detect incompatible changes made for one solution which adversely affect the other.
EF6 supports several different DbContexts in the same database, so if each of your applications had a different, non-conflicting DbContext, you wouldn't have a problem. I cannot check it right now, but I think the name of the DbContext must be different in each solution (I don't remember if namespaces are taken into account). By non-conflicting I mean that they refer to different database objects (tables, views, or whatever), or that the objects referred to by both contexts are never changed (for example, master tables).
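For illustration, a minimal sketch of what two such non-conflicting EF6 contexts might look like (the entity names and the "SharedDb" connection string are made up; each context maps a disjoint set of tables and neither is allowed to create or migrate the shared database):

using System.Data.Entity;

public class Order { public int Id { get; set; } public string Number { get; set; } }
public class ReportRow { public int Id { get; set; } public decimal Total { get; set; } }

// Used by application A: maps only the Orders table.
public class OrdersContext : DbContext
{
    public OrdersContext() : base("name=SharedDb")
    {
        Database.SetInitializer<OrdersContext>(null);  // never touch the shared schema
    }
    public DbSet<Order> Orders { get; set; }
}

// Used by application B: maps only the ReportRows table.
public class ReportingContext : DbContext
{
    public ReportingContext() : base("name=SharedDb")
    {
        Database.SetInitializer<ReportingContext>(null);
    }
    public DbSet<ReportRow> ReportRows { get; set; }
}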

Mapping Tool like EF Designer but for Data objects?

In trying to separate my domain layers and GUI, and looking into all the different ways to do that, one thing I keep asking is: why is this so difficult? Why all the extra code for data objects, and then all the extra mapping of properties, copying values in and out, etc.? Shouldn't there be an easier way?
Then I remembered when I used to write small little DB apps using MS Access. Access has the concept of a Dynaset: basically a Dynaset is a View, just like an SQL Server View, except it is an updatable view. An MS Access form would be based on the View/Dynaset and therefore would not have to know the details of all the individual tables involved. Sounds like the data objects pattern to me. Now, since Access has had this for two decades, shouldn't there be a similar Dynaset/View/mapping tool for Entity Framework, one that abstracts the entities away from the presentation? Is there one I am not aware of? Third party?
Thoughts on this?
If I understand you correctly, you may be looking for Entity Framework with POCO entities. You can find templates for them in the online template gallery (when you Add New Item in the project). Alternatively you can right-click in your .edmx design view, select "Add Code Generation Item" and pick the Fluent Generator.
These methods create multiple files instead of the default all-in-one EF generated file. One file is the DbContext (as opposed to ObjectContext), one contains only the entities (in the form of regular C# objects - no attributes or anything, just plain objects) and the last contains the generated mapping in the form of fluent rules.
At this point you can decouple the entities file from its template and move it to another assembly. And voila, you have entities independent of the EF infrastructure. You can just pass these entities to the context like you would before, and it'll do the mapping by itself.
Alternatively you can use a tool like AutoMapper, but you'll have to provide the mapping manually, which is a lot of work, but may be good in some cases; a rough example follows.
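As an illustration of the AutoMapper route (this uses AutoMapper's classic static API, which newer versions replaced with an instance-based one; Foo and FooDto are made-up types):

using AutoMapper;

public class Foo { public int Id { get; set; } public string Name { get; set; } }
public class FooDto { public int Id { get; set; } public string Name { get; set; } }

static class MappingExample
{
    static void Configure()
    {
        // One-time configuration; properties with matching names map by convention.
        Mapper.CreateMap<Foo, FooDto>();
    }

    static FooDto ToDto(Foo foo)
    {
        return Mapper.Map<FooDto>(foo);
    }
}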
Good design requires work. If it was easy, everyone would do it automatically. After all, everyone wants to do the least amount of work possible.
All the things you are complaining about are part of the good design process, and there is no getting around them if you want a good design.
If you want to take shortcuts, then by all means, skip them. It's your code; nothing requires you to do things any specific way.
Access can do a lot of things because it's a desktop application, not a web application. Web applications are fundamentally different from desktop applications in how you design them, how they work, and what issues you face with them. For instance, the stateless environment and the inability to keep a result set from request to request make many of the things people take for granted in Access impossible in a web app.
Specifically, if you want to use views, you can do so. Views are updatable if they are properly designed, but typically require update statements that only affect one table in the view. EF can work with views as well, but it has a lot of quirks you must deal with.
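For the read-mostly side, code-first can be pointed at an existing view just as if it were a table; a hedged sketch (the view name, entity, and key are made up, and writes will only succeed if the view itself is updatable):

using System.Data.Entity;

public class CustomerSummary
{
    public int CustomerId { get; set; }
    public decimal TotalSales { get; set; }
}

public class ViewBackedContext : DbContext
{
    public DbSet<CustomerSummary> CustomerSummaries { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the entity to an existing view; EF will happily SELECT from it.
        modelBuilder.Entity<CustomerSummary>()
                    .HasKey(c => c.CustomerId)
                    .ToTable("vw_CustomerSummary");
    }
}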
The data mapper pattern has emerged as a common pattern in web design because it's the easiest and most straightforward way to get a clean separation of concerns between layers and/or tiers. I suggest you find ways to make it work within your development process.
It may also be that MVC is not the most appropriate framework for you. It sounds more like you want to build web apps the way you built Access apps, in which case Visual Studio LightSwitch may be a better choice for you.
http://msdn.microsoft.com/en-us/library/ff851953.aspx

Strategies for replacing legacy data layer with Entity framework and POCO classes

We are using .NET C# 4.0, VS 2010, EF 4.1 and legacy code in this project.
I'm working on a WinForms project where I have decided to start using Entity Framework 4.1 for accessing an MS SQL DB. The code base is quite old, and we have an existing data layer that uses data adapters, which are used all over the place (in web apps and WinForms apps). My plan is to replace the old DB access code with EF over time and get rid of the tight coupling between the UI layers and the data layer.
So my idea is to more or less combine EF with the legacy data access layer and slowly replace that layer with a more modern take on things using EF. For now, we need to use both EF and the legacy DB access code.
What I have done so far is add a project containing the edmx file and context. The edmx is generated using the database-first approach. I have also added another project that contains the POCO classes (generated with the ADO.NET POCO Entity Generator). I have more or less followed Julia Lerman's approach in her book "Programming Entity Framework" on how to split the model and the generated POCO classes. The database model has been set for years and it's not an option to change the tables, relationships, triggers, stored procedures, etc., so I'm basically stuck with the DB model as it is.
I have read about the repository and unit of work patterns, and I rather like them, but I struggle to implement them when I have both EF and the legacy DB access code to deal with - especially since I don't have the time to replace all of the legacy code with a pure EF implementation. In a perfect world I would start all over again with a fresh take on the data model, but that is not an option here.
Are the repository and unit of work patterns the way to go here? In order to use the POCO classes in my business layer, I sometimes need both EF and the legacy DB code to populate them. In other words, I can sometimes use EF to retrieve part of the data I need, use the old DB access layer to retrieve the rest, and then map the data onto my POCO classes. When I want to update some data, I need to pick data from the POCO classes and use the legacy data access code to store it in the database. So I need to map data retrieved from the legacy data access layer onto my POCO classes when I want to display it in the UI, and vice versa when I want to save it to the database.
To complicate things, we store some data in tables whose names we don't know until runtime (please don't ask me why :-) ). So in the old DB access layer, we had to build SQL statements on the fly, inserting the table and column names based on information from other tables.
I also find the relationships between the POCO classes somewhat too database-centric. In other words, I feel I need a more simplified domain model to work with. Perhaps I should create a domain model that fits the bill and then use the POCO classes as "DAOs" to populate the domain model classes?
How would you implement this using the Repository pattern and Unit of Work pattern? (if that is the way to go)
Alarm bells are ringing for me! We tried to do something similar a while ago (only with NHibernate, not EF4). We had several problems running ADO.NET alongside an ORM - database concurrency being a big one.
The database model has been set for years and it's not an option to change the tables, relationships, triggers, stored procedures, etc., so I'm basically stuck with the DB model as it is.
Yep. Same thing! The problem was that our stored procs contained a lot of business logic and weren't simple CRUD procs so keeping the ORM updated with the various updates performed by a stored procedure was not easy at all - Single Responsibility Principle - not a good one to break!
My plan is to replace the old DB access code with EF over time and get rid of the tight coupling between the UI layers and the data layer.
Maybe you could decouple without the need for an ORM - how about putting a service/facade layer in front of your UI layer to coordinate all interactions with the underlying domain and hide it from the UI?
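A compile-able sketch of that facade idea (all names are hypothetical; the point is that the UI depends only on the interface):

public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerService
{
    CustomerDto GetCustomer(int id);
    void RenameCustomer(int id, string newName);
}

public class CustomerService : ICustomerService
{
    public CustomerDto GetCustomer(int id)
    {
        // Today: the legacy SqlDataAdapter code lives here.
        // Later: swap in an EF query without touching the UI.
        return new CustomerDto { Id = id, Name = "placeholder" };
    }

    public void RenameCustomer(int id, string newName)
    {
        // Same idea for writes.
    }
}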
If your database is 'king' and your app is highly data driven I think you will always be fighting an uphill battle implementing the patterns you mention.
Embrace ADO.NET for this project - use EF4 and DDD patterns on your next greenfield project :)
EDMX + the POCO class generator produces EFv4 code, not EFv4.1 code, but you don't have to bother with these details. EFv4.1 just offers a different API which does exactly the same thing (it is only a wrapper around the EFv4 API).
Depending on how you use datasets, you can hit some very hard problems. Datasets are a representation of the change set pattern: they know what changes were made to the data, and they can store just those changes. EF entities know this only while they are attached to the context which loaded them from the database. Once you work with detached entities, you must make a big effort to tell EF what has changed - especially when modifying relations (detached entities are a common scenario in web applications and web services). For those purposes EF offers another template called self-tracking entities, but they have problems and limitations of their own (for example missing lazy loading, and you cannot apply changes when an entity with the same key is already attached to the context, etc.).
EF also doesn't support several features used with datasets - for example unique keys and batch updates. It's funny that newer MS APIs usually solve some pains of the previous APIs but at the same time provide far fewer features, which introduces new pains.
Another problem can be performance - EF is slower than direct data access with datasets and has higher memory consumption (and yes, there are some memory leaks reported).
You can forget about using EF for accessing tables which you don't know at design time. EF doesn't allow any dynamic behavior: table names and the type of database server are fixed in the mapping. Other problems can come from the way you use triggers - ORM tools don't like triggers, and EF has limited support for database-computed values (a value can be filled in either by the database or by the application, but not both).
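If you do need to read such tables while keeping EF around for everything else, the usual escape hatch is raw SQL through the context. A rough sketch (the entity and table handling are made up, and EF will not track or map these rows through the model):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class MeasurementRow
{
    public int Id { get; set; }
    public double Value { get; set; }
}

public static class DynamicTableReader
{
    public static List<MeasurementRow> ReadAll(DbContext context, string tableName)
    {
        // The table name cannot be a SQL parameter, so it MUST be validated
        // against a whitelist (e.g. sys.tables) before being spliced in.
        string sql = string.Format("SELECT Id, Value FROM [{0}]", tableName);
        return context.Database.SqlQuery<MeasurementRow>(sql).ToList();
    }
}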
Filling POCOs from EF + datasets in the way you describe will not be possible using only EF. EF has a set of allowed mapping patterns, but the possibilities for mapping several tables to a single POCO class are extremely limited and constrained (if you want those tables to be editable). If you mean just loading one entity from EF and another entity from a data adapter and making a reference between them, you should be OK - in this scenario the repository sounds like a reasonable pattern, because the purpose of the repository is exactly this: load or persist data. Unit of work can also be useful, because you will most probably want to reuse a single database connection between EF and the data adapters to avoid a distributed transaction when saving changes. The UoW will be the place responsible for handling this connection (see the sketch below).
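A minimal sketch of that idea, assuming the DbContext(DbConnection, bool) constructor available since EF 4.1 (the table, context, and connection-string names are made up):

using System.Data.Common;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Transactions;

public class MyDbContext : DbContext
{
    public MyDbContext(DbConnection conn, bool contextOwnsConnection)
        : base(conn, contextOwnsConnection) { }
}

public class LegacyAndEfUnitOfWork
{
    public void Save(string connectionString)
    {
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Legacy path: command/data adapter against the shared connection.
            using (var cmd = new SqlCommand("UPDATE Foo SET Bar = 1", conn))
            {
                cmd.ExecuteNonQuery();
            }

            // EF path: hand the same open connection to the context and tell
            // it not to dispose it (contextOwnsConnection: false).
            using (var ctx = new MyDbContext(conn, contextOwnsConnection: false))
            {
                // ... modify entities ...
                ctx.SaveChanges();
            }

            // A single connection inside one TransactionScope avoids escalation
            // to a distributed transaction on SQL Server 2008 and later.
            scope.Complete();
        }
    }
}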
EF mapping is tied to the database design - you can introduce some object-oriented modifications, but EF still depends closely on the database. If you want a more advanced domain model, you will probably need separate domain classes filled from EF and the datasets. Again, it will be the repository's responsibility to hide these details.
From what we have implemented so far, I have learned the following things.
POCO and self-tracking objects are difficult to deal with: if you do not have a clear understanding of what goes on inside them, you will hit a number of unexpected behaviors in code that worked well in your previous projects.
Changing patterns is not easy. So far we had been managing simple CRUD without the unit of work and identity map patterns, so a lot of the legacy code we wrote in the past does not account for these new patterns, and its logic will not work correctly.
In our previous code, we simply used transactions and single insert/update/delete statements sent directly to the database, assuming server-side transactions would take care of all operations.
Under those conditions, we dealt with IDs directly all the time; newly generated IDs were immediately available after a single insert statement. That is not the case with EF.
In EF, we are not dealing with IDs; we are dealing with navigation properties, which is a huge change from earlier ADO.NET programming habits.
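The shift looks roughly like this (a compile-able sketch with a made-up Order/Customer model):

using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public virtual Customer Customer { get; set; }  // navigation property, not a raw FK
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}

public static class Example
{
    public static int AddOrder(ShopContext context, Customer existingCustomer)
    {
        // Old ADO.NET habit: INSERT, then SELECT SCOPE_IDENTITY() for the new ID.
        // EF habit: set the navigation property and let SaveChanges handle the keys.
        var order = new Order { Customer = existingCustomer };
        context.Orders.Add(order);
        context.SaveChanges();
        return order.Id;  // populated by EF after the insert
    }
}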
From our experience, we found that merely swapping EF in for the earlier data access code results in chaos. But EF + RIA Services offer a completely new solution where you will probably get everything you need, and your UI will bind to it very easily. So if you are considering a complete rewrite using UI + RIA Services + EF, it is worth it, because a lot of the query-management dependencies disappear automatically and you can focus only on business logic. But it is a big decision, and the man-hours required for a complete rewrite versus merely swapping in EF are almost the same.
So we went the UI + RIA Services + EF way, and we started replacing one module at a time. Mostly EF will easily co-exist with your existing infrastructure, so there is no harm.

Dealing with Schema Updates in nHibernate/Fluent nHibernate after Deployment

In writing an application that runs on Fluent NHibernate/NHibernate, something has me a bit concerned. I suppose this would be true of any ORM (and even without an ORM), but what is the... I guess the phrase is 'field of study' that covers best practices and methods for updating a database after deployment?
In NHibernate, I establish a SessionFactory and have an initial run where it writes the database out based on the mappings. That's fine and good; I can even write the database out manually. But what about when my client comes back and wants something new added? Can I add to the database without losing my data? I am completely new to all of this; it has been troubling me since the start of this project, and I really do not know which direction to take to make sure I can maintain the program after it is deployed.
I have looked at other Stack Overflow questions I could find on this topic - one of which did not even have an accepted answer (though the question itself was kind of vague) - but I did discover the tool http://www.red-gate.com/products/sql-development/sql-compare/ from the question
Tool to upgrade SQL Express database after deployment, though I am wondering just how good a 'strategy' that is.
There are a couple of options. One is to use the AutoMapping feature in Fluent NHibernate to minimize the mapping code you write; if your schema changes comply with the AutoMap conventions, then you only need to make the corresponding domain object changes.
Another, less optimal option is to take a database-first approach and have something like MyGeneration automatically generate the domain classes and NHibernate mapping files from the schema. This works if you have complete control of the database schema and it can be made to implement a good domain model design (both conditions which very rarely ever happen...).
With either approach, these tools can help handle the database scripting needed to "migrate" the schema changes to a new version.
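NHibernate itself also ships a schema-update tool that applies additive changes (new tables and columns) to an existing database without touching existing data. A hedged sketch, assuming a Fluent NHibernate setup with mappings in the current assembly:

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate.Tool.hbm2ddl;

public static class SchemaUpdater
{
    public static void Update(string connectionString)
    {
        Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008.ConnectionString(connectionString))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SchemaUpdater>())
            // Execute(useStdOut: false, doUpdate: true) - don't echo the script,
            // do run it. SchemaUpdate only ADDS missing tables/columns; it never
            // drops or alters existing ones, so existing data is preserved.
            .ExposeConfiguration(cfg => new SchemaUpdate(cfg).Execute(false, true))
            .BuildConfiguration();
    }
}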
From my experience, after deployment you have to keep your DB structure up to date manually. That means that whenever you add to or change your DB structure, you do so in a script of DDL commands. When you're ready to deploy, you just run those DDL scripts against your production DB. For example, if you add a 'bar' column to your 'foo' table, your script would be something like:
ALTER TABLE foo ADD bar INT NOT NULL DEFAULT 0;

WCF can't serialize cyclic references

I have a database with lots of relationships between tables, and a Silverlight client that connects to my server via a WCF service on the ASP.NET side.
First I used LINQ to SQL as a robust mapper from tables to objects, with a web method that returns a List<Foo> of my database objects (call it GetFoo()). Foo has lots of relationships with other objects, each of which has lots of relationships too (that is, there are PKs and FKs between tables). I also use the Microsoft Service Trace Viewer to trace my service.
When I call GetFoo(), WCF returns this error:
Object graph for type 'X.Y.Z' contains cycles and cannot be serialized if reference tracking is disabled
I searched for this error and found this great post, but it did not work properly and I still see the same error.
Various options:
remove the cyclic dependencies from your model; this might be tricky for a generated model that has lots of existing code built against it, but is worth a try; however, you typically want to not serialize the parent reference, which is exactly the one LINQ-to-SQL wants you to keep (it'll let you drop the children property, but that is the one you usually want to serialize)
enable cyclic references; it looks like you've tried this without success; did you enable it at both ends, though? Actually, I wouldn't be surprised if Silverlight doesn't like this extension (it has limited extension support)
use a separate (flat) DTO model for data transfer purposes
try using NetDataContractSerializer; I can't remember whether it is supported in Silverlight, and I must admit I'm not its biggest fan, but it might be a pragmatic fix here
I'd vote firmly for the "DTO model" option; simply put, having a separate model means you are less likely to run into tangles whenever you tweak the DB - and you are in complete control over it.
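For example, a flat DTO plus a projection keeps the cycle out of the wire format entirely; a sketch with made-up types (in the real app, Foo would be the DBML-generated entity):

using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;

// Flat DTO: scalar members only, no object references, hence no cycles.
[DataContract]
public class FooDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string ParentName { get; set; }  // flattened, not referenced
}

// Stand-in for the generated entity with the back-reference that caused the cycle.
public partial class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Foo Parent { get; set; }
}

public static class FooQueries
{
    // 'foos' would be the Foos table on the LINQ-to-SQL data context.
    public static List<FooDto> ToDtos(IQueryable<Foo> foos)
    {
        return foos.Select(f => new FooDto
        {
            Id = f.Id,
            Name = f.Name,
            ParentName = f.Parent.Name  // pull one scalar across the relationship
        }).ToList();
    }
}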
A bit late to this, but if anyone is using LINQ to SQL and has this problem, you can simply open the tables in your DBML class, right-click next to a table and click Properties.
There is a property named Serialization Mode; set it to Unidirectional.
The error will be gone.
I know this is an old question now, but did you try decorating the classes generated by your DBML with [DataContract(IsReference = true)]?
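That attribute switches DataContractSerializer into reference-tracking mode, which is exactly what the error message says is disabled. A minimal standalone illustration:

using System.Collections.Generic;
using System.Runtime.Serialization;

// IsReference = true makes the serializer emit each object once and refer
// back to it by id, so parent/child cycles no longer throw.
[DataContract(IsReference = true)]
public class Parent
{
    [DataMember] public List<Child> Children { get; set; }
}

[DataContract(IsReference = true)]
public class Child
{
    [DataMember] public Parent Owner { get; set; }  // back-reference: a cycle
}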
I had the same problem in 2010 and had to resort to some fairly extreme measures to get it to work on the client and service sides, but I recently went back through it with VS2013/.NET 4.5 and had much less pain, as documented here (with EF v6 RC1 POCO objects): http://sanderstechnology.com/2013/more-with-the-entity-framework-v6-rc1/12423/
