Entity Framework with Federated Service implementation - C#

I'm not sure exactly what I'm looking for, and everything I've seen so far looks like it will work until I really dive into it. I just need some pointers from the brains here. I'm working on an ASP.NET MVC, EF5, SQL Server 2012 project. We have a model set that isn't code-first (the entities were built using the designer) and, as of right now, everything is working just fine. But we have this setup script (as convoluted as I've ever seen) and I need to turn it into something more automated. Right now the setup script pre-populates the tables with data: lookups, reference data, etc.

I'm looking for a way to automate this further, without having to run this script, and even more so, to generate the database and tables automatically. Every article I've read seems to do the trick (migrations, seeding, etc.), but the one thing they don't take into consideration is that we federate services. The actual EDMX lives behind a WCF Data Services 5.6 service. I have access to the models and whatnot, but the WCF service exposes a DataServiceContext, which doesn't have a Seed method on it.

Am I looking at the right stuff here, or is the only option to keep this confounded setup script (all C# driven)? This website has been instrumental so far: http://www.entityframeworktutorial.net/code-first/seed-database-in-code-first.aspx, as well as this: Auto Create Database Tables from Objects, Entity Framework, but I don't see how I can use these over WCF Data Services 5.6.

The short answer is that Model-First doesn't give you a Seed method because the expectation is that you will seed with a SQL script, but you have a few choices:
Use EF PowerTools (or VS2013's EF Designer) to generate the "Code-First" model from your DB. This will allow you to seed your DB, and have finer control over how everything operates under the hood.
Use a SQL script to seed with. Generally, if you make changes to your schema, you'll re-run your recreate DB script. Create a separate script to populate your DB and keep it handy. If you feel more comfortable in code than SQL, you can make a console app (or whatever type of app you want) and keep it up to date with your schema.
If all you need is seed data, and there is a good business case to expose a method to your service consumers, you can keep Model-First, create a stored procedure to seed your DB, and expose it as an EF function. You can then expose this in your WCF service.
Personally, I tend towards designing the DB myself, using VS 2013's EF6 POCO generator, then using Code-First because of the better granular control that you get with real data classes. Then I do some cleanup work, write my seed methods, etc.
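If you go down the Code-First route from the first option, the seeding typically lives in the migrations Configuration class. Here is a minimal sketch, assuming a hypothetical MyContext generated by the reverse-engineering step and a hypothetical Lookup entity; it is only an outline of the idea, not your actual model:

    // Migrations/Configuration.cs, scaffolded by Enable-Migrations.
    // MyContext and Lookup are placeholder names for this sketch.
    using System.Data.Entity.Migrations;

    internal sealed class Configuration : DbMigrationsConfiguration<MyContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;
        }

        // Runs after every Update-Database; AddOrUpdate keeps the seed idempotent.
        protected override void Seed(MyContext context)
        {
            context.Lookups.AddOrUpdate(
                l => l.Code, // natural key used to find existing rows
                new Lookup { Code = "ACTIVE", Name = "Active" },
                new Lookup { Code = "INACTIVE", Name = "Inactive" });

            context.SaveChanges();
        }
    }

Note that this runs against the database directly, not through the WCF Data Services layer, so it belongs wherever the schema is owned rather than in the service consumers.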

Related

C# Is there an easier way to create a database, empty tables, tables with data in them by default, stored procs and views?

Before I posted this question, I did some Googling on how a database can be created through C#. Mostly it points to either SMO or SQL script files, and those answers date from the SQL Server 2005 and 2008 era.
So, at this day and age, is there an easier way to create a database with empty tables, tables with data in them by default, stored procedures and views?
I need a suggestion.
I think the answer is probably Entity Framework. You can do 'code first' and use database migrations, allowing you to write your C# code and use that to generate a lot of the database for you.
Ultimately though, 'easier' is subjective. I personally find EF great for the 'normal' stuff, but at the end of the day, if you need a stored procedure to do some custom logic, you need to write that custom logic in some fashion.
Maybe have a look and see if you think it fits your needs.
https://www.asp.net/mvc/overview/getting-started/getting-started-with-ef-using-mvc/creating-an-entity-framework-data-model-for-an-asp-net-mvc-application
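For reference, the code-first starting point that tutorial builds on is just a POCO plus a DbContext; the Product and ShopContext names below are made up for illustration:

    // A minimal code-first sketch; Product and ShopContext are illustrative names.
    using System.Data.Entity;

    public class Product
    {
        public int Id { get; set; }          // becomes the primary key by convention
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Product> Products { get; set; }   // becomes a Products table
    }

From there, Enable-Migrations, Add-Migration InitialCreate and Update-Database in the Package Manager Console generate and apply the schema, and the migrations Configuration's Seed method can insert the default data.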
Look at the database projects in Visual Studio 2013. You create a database as a series of scripts using a familiar GUI; changes are then published, and the publish process creates a change script targeting the connection you define. For new databases the whole thing gets created, but publish against a partial or outdated version and the generated script is a change script that brings it up to date.
You can even write unit tests against your database using specialist tools, although I do find them lacking a bit.
There's more on MSDN.
It depends. Right out of the gate, for stored procedures and views, your best shot is to create them directly in the database through a workbench; you can then capture the definitions, store them in a file, and replay them through C#.
As for tables, there are many ORMs that can generate tables from C#. Look at Entity Framework's code-first examples.
I have generated tables using EF and it works fine. I then went into the database and created the views and stored procedures.
The trick is to migrate the new views and stored procedures into your EF model; you can Google "Entity Framework code first adding views and SPs".
Worst case, you create the whole database through a database workbench, create a script that can be replayed to recreate everything, and then use the EF database-first approach.
In either case you end up with a good set of auto-generated code to manage CRUD, object management, and an abstracted data model.
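To make the "views and SPs in code first" part concrete: one common approach is to create them inside a hand-written migration and query them with SqlQuery. A rough sketch, with made-up object names:

    // A hand-written migration that creates a view; vw_ActiveProducts is a made-up name.
    using System.Data.Entity.Migrations;

    public partial class AddActiveProductsView : DbMigration
    {
        public override void Up()
        {
            Sql("CREATE VIEW dbo.vw_ActiveProducts AS SELECT Id, Name FROM dbo.Products WHERE IsActive = 1");
        }

        public override void Down()
        {
            Sql("DROP VIEW dbo.vw_ActiveProducts");
        }
    }

Reading from it later is a matter of context.Database.SqlQuery<SomeDto>("SELECT Id, Name FROM dbo.vw_ActiveProducts"), where SomeDto is any class whose property names match the columns.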

Update Db in EF5 with MVC5

I am working on a project using Entity Framework 5 with MVC5. My Project is currently running.
I am trying to add a column to a table. But, as we know, when we add a field to the model in EF, it drops and recreates the database, which I can't do.
One method I found for this is code migrations, but my manager does not allow me to use that (because it's a big database project).
Please help me and suggest something for it.
When I started using code first with Entity Framework, I was in the same situation as you. I was always running Update-Database -F and then watching all my tables get dropped and recreated, even for something as simple as renaming a field.
Versioning databases is hard, but it's much easier with named migrations (which I think is what you mean when you refer to code migrations). I know your boss is against the idea, but it's very flexible.
Essentially you run Add-Migration -Name xxx in your Package Manager Console and Entity Framework will scaffold a migration class for you with the default operations (both Up() for applying the change and Down() for reverting it) that it will execute when you run Update-Database. If you don't like the generated operations, you can change them. You can even move data around if you need to (it's a bit fiddly though).
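For example, adding a column the way the question describes would scaffold something roughly like this (the migration, table and column names below are made up for illustration):

    // Scaffolded by e.g. Add-Migration AddCustomerPhone; names here are illustrative only.
    using System.Data.Entity.Migrations;

    public partial class AddCustomerPhone : DbMigration
    {
        public override void Up()
        {
            // Adds the new column without dropping or recreating the table.
            AddColumn("dbo.Customers", "Phone", c => c.String(maxLength: 20));
        }

        public override void Down()
        {
            // Reverting the migration removes the column again.
            DropColumn("dbo.Customers", "Phone");
        }
    }

Running Update-Database then applies only that change; existing data stays put.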
I think you have four options available to you:
Use code-first automatic migrations: This is what you have at the moment, and it doesn't give you enough control over what happens when you update your database. It's good for getting started in the earlier stages of a project, but it becomes unwieldy once you're in production.
Use code-first named migrations: Gives you the control you need via the migrations Configuration, but your boss has prevented you from using it.
Use a database-first approach: Database First allows you to reverse engineer a model from an existing database. So if you need to make a change, you would change your database first and then regenerate your models using EF. This is usually favoured by DBAs, but it may mean that you have to reimplement some aspects of your existing project.
Don't use Entity Framework: It's possible that you could revert back to SQL queries, which your boss might accept and which gives you the flexibility you need - but who needs that kind of pain?
Let me know if I can help further.

Strategies for replacing legacy data layer with Entity framework and POCO classes

We are using .net C# 4.0, VS 2010, EF 4.1 and legacy code in this project we are working on.
I'm working on a WinForms project where I have made the decision to start using Entity Framework 4.1 for accessing an MS SQL DB. The code base is quite old and we have an existing data layer that uses data adapters, and these data adapters are used all over the place (in web apps and WinForms apps). My plan is to replace the old DB access code with EF over time and get rid of the tight coupling between the UI layers and the data layer.
So my idea is to more or less combine EF with the legacy data access layer and slowly replace the legacy data layer with a more modern take on things using EF. So for now we need to use both EF and the legacy db access code.
What I have done so far is to add a project containing the EDMX file and context. The EDMX was generated using the database-first approach. I have also added another project that contains the POCO classes (generated with the ADO.NET POCO Entity Generator). I have more or less followed Julia Lerman's approach in her book "Programming Entity Framework" for splitting the model and the generated POCO classes. The database model has been set for years and it's not an option to change the tables, relationships, triggers, stored procedures, etc., so I'm basically stuck with the DB model as it is.
I have read about the repository pattern and unit of work, and I quite like the patterns, but I struggle to implement them when I have both EF and the legacy DB access code to deal with, especially since I don't have the time to replace all of the legacy DB access code with a pure EF implementation. In a perfect world I would start all over again with a fresh take on the data model, but that is not an option here.
Are the repository and unit of work patterns the way to go here? In order to use the POCO classes in my business layer, I sometimes need to use both EF and the legacy DB code to populate my POCO classes. In other words, I can sometimes use EF to retrieve part of the data I need, then use the old DB access layer to retrieve the rest, and then map the data to my POCO classes. When I want to update some data, I need to pick data from the POCO classes and use the legacy data access code to store it in the database. So I need to map the data retrieved from the legacy data access layer to my POCO classes when I want to display the data in the UI, and vice versa when I want to save data to the database.
To complicate things we store some data in tables that we don't know the name of before runtime (Please don't ask me why:-) ). So in the old db access layer, we had to create sql statements on the fly where we inserted the table and column names based on information from other tables.
I also find that the relationships between the POCO classes are somewhat too database-centric. In other words, I feel that I need a more simplified domain model to work with. Perhaps I should create a domain model that fits the bill and then use the POCO classes as "DAOs" to populate the domain model classes?
How would you implement this using the Repository pattern and Unit of Work pattern? (if that is the way to go)
Alarm bells are ringing for me! We tried to do something similar a while ago (only with NHibernate, not EF4). We had several problems running ADO.NET alongside an ORM, database concurrency being a big one.
"The database model has been set for years and it's not an option to change the tables, relationships, triggers, stored procedures, etc., so I'm basically stuck with the DB model as it is."
Yep. Same thing! The problem was that our stored procs contained a lot of business logic and weren't simple CRUD procs so keeping the ORM updated with the various updates performed by a stored procedure was not easy at all - Single Responsibility Principle - not a good one to break!
"My plan is to replace the old DB access code with EF over time and get rid of the tight coupling between the UI layers and the data layer."
Maybe you could decouple without the need for an ORM; how about putting a service/facade layer in front of your UI layer to coordinate all interactions with the underlying domain and hide it from the UI.
If your database is 'king' and your app is highly data driven I think you will always be fighting an uphill battle implementing the patterns you mention.
Embrace ADO.NET for this project; use EF4 and DDD patterns on your next greenfield project :)
EDMX + the POCO class generator produces EFv4 code, not EFv4.1 code, but you don't have to worry about these details. EFv4.1 just offers a different API which does exactly the same thing (it is only a wrapper around the EFv4 API).
Depending on how you use datasets, you can run into some very hard problems. Datasets are a representation of the change set pattern: they know what changes were made to the data and they are able to store just those changes. EF entities know this only if they are attached to the context that loaded them from the database. Once you work with detached entities, you must make a big effort to tell EF what has changed, especially when modifying relations (detached entities are a common scenario in web applications and web services). For those purposes EF offers another template called self-tracking entities, but they have their own problems and limitations (for example, no lazy loading, and you cannot apply changes when an entity with the same key is already attached to the context, etc.).
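For plain scalar changes the reattach-and-mark pattern is at least manageable; relationship changes are where the real pain is. A minimal DbContext-style sketch (EF 4.1 API), with MyContext and Customer as placeholder names:

    // Reattaching a detached entity so EF knows it was modified.
    // MyContext and Customer are placeholder names for this sketch.
    using System.Data;          // EntityState lives here in EF 4.x/5 (System.Data.Entity in EF6)
    using System.Data.Entity;

    public static class DetachedUpdateExample
    {
        public static void Save(Customer detachedCustomer)
        {
            using (var context = new MyContext())
            {
                // Attach the detached instance and mark all of its scalar properties as modified.
                context.Entry(detachedCustomer).State = EntityState.Modified;
                context.SaveChanges();   // issues a single UPDATE covering every column
            }
        }
    }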
EF also doesn't support several features used by datasets, for example unique keys and batch updates. It's funny that newer MS APIs usually solve some pains of the previous APIs but at the same time provide far fewer features than those previous APIs, which introduces new pains.
Another problem can be performance: EF is slower than direct data access with datasets and has higher memory consumption (and yes, there are some reported memory leaks).
You can forget about using EF for accessing tables which you don't know at design time. EF doesn't allow any dynamic behaviour: table names and the type of database server are fixed in the mapping. Another problem can be the way you use triggers; ORM tools don't like triggers, and EF has limited support for database-computed values (a value can be filled either in the database or in the application, but not both).
The idea of filling POCOs from EF + datasets sounds like it will not be possible using EF alone. EF has a set of allowed mapping patterns, but the possibilities for mapping several tables to a single POCO class are extremely limited and constrained (if you want those tables to be editable). If you mean just loading one entity from EF and another entity from a data adapter and making a reference between them, you should be OK; in this scenario the repository sounds like a reasonable pattern, because the purpose of the repository is exactly this: to load or persist data. Unit of work can also be useful, because you will most probably want to reuse a single database connection between EF and the data adapters to avoid a distributed transaction when saving changes. The UoW will be the place responsible for handling this connection.
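A rough sketch of that shared-connection idea, assuming a DbContext-style context (MyContext is a made-up name) that exposes a constructor accepting an existing connection; an EDMX-based ObjectContext would need an EntityConnection wrapper instead:

    // A unit of work that shares one SqlConnection between EF and the legacy data adapters,
    // so both can participate in the same local transaction. MyContext is a placeholder.
    using System;
    using System.Data.SqlClient;

    public class UnitOfWork : IDisposable
    {
        private readonly SqlConnection _connection;
        public MyContext Context { get; private set; }

        public UnitOfWork(string connectionString)
        {
            _connection = new SqlConnection(connectionString);
            _connection.Open();

            // Second argument false: the context does not own (and will not dispose) the connection.
            Context = new MyContext(_connection, false);
        }

        // Legacy code builds its adapters and commands on the same open connection.
        public SqlCommand CreateLegacyCommand(string sql)
        {
            return new SqlCommand(sql, _connection);
        }

        public void Dispose()
        {
            Context.Dispose();
            _connection.Dispose();
        }
    }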
EF mapping is tied to the database design; you can introduce some object-oriented modifications, but EF still depends closely on the database. If you want to use a more advanced domain model, you will probably need separate domain classes filled from EF and the datasets. Again, it will be the responsibility of the repository to hide these details.
From what we have implemented so far, I have learned the following things.
POCO and self-tracking objects are difficult to deal with; if you do not have a clear understanding of what goes on inside, you will run into a number of unexpected behaviours in code that worked well in your previous project.
Changing patterns is not easy. So far we had been managing simple CRUD without the unit of work and identity map patterns; a lot of the legacy code we wrote in the past does not account for these new patterns, and its logic will not work correctly.
In our previous code, we were simply using transactions and single insert/update/delete statements sent directly to the database, assuming that server-side transactions would take care of all operations.
In those conditions we were dealing with IDs directly all the time; newly generated IDs were immediately available after a single insert statement. That is not the case with EF.
In EF, we are not dealing with IDs; we are dealing with navigation properties, which is a huge change from earlier ADO.NET programming methods.
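To illustrate the shift (Order, OrderLine and MyContext are made-up names): instead of inserting the parent, reading back its new ID and stamping it onto every child row, you wire the objects together and let EF fix the keys up during SaveChanges.

    // Navigation-property style: no manual ID plumbing. All names here are placeholders.
    using (var context = new MyContext())
    {
        var order = new Order { CustomerName = "Contoso" };

        // Children are linked through the navigation property, not through an OrderId value.
        order.Lines.Add(new OrderLine { Product = "Widget", Quantity = 3 });
        order.Lines.Add(new OrderLine { Product = "Gadget", Quantity = 1 });

        context.Orders.Add(order);
        context.SaveChanges();      // EF inserts the order first, then the lines with the new key

        int generatedId = order.Id; // the identity value is populated after SaveChanges
    }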
From our experience we found that simply swapping EF in for the earlier data access code will result in chaos. But EF + RIA Services offers a completely new solution where you will probably get everything you need, and your UI will bind to it very easily. So if you are thinking about a complete rewrite using UI + RIA Services + EF, it is worth it, because a lot of the query-management plumbing goes away automatically and you can focus only on business logic. It is a big decision, though, and the number of man-hours required for a complete rewrite versus just swapping in EF is almost the same.
So we went the UI + RIA Services + EF way, and we started replacing one module at a time. For the most part EF will co-exist easily with your existing infrastructure, so there is no harm.

Dealing with Schema Updates in nHibernate/Fluent nHibernate after Deployment

In writing an application that runs on Fluent Nhibernate/Nhibernate, something has me a bit concerned. I suppose this would be true of any ORM (and even without using an ORM), but what is the ... I guess the word is 'field of study' that relates to the best practices and methods for updating a database after deployment?
In nHibernate, I establish a SessionFactory and have an initial run where it writes the database out based on the mappings. That's fine and good, I can even write the database out manually. But what about when my client comes back and wants something new added? Can I append to the database without losing my data? I am completely new to all of this and it has been troubling me since the start of this project, and I really do not know what direction to go to make sure I can manage the program after it is deployed.
I have looked at other Stack Overflow questions that I could find on this topic, one of which did not even have an accepted answer (though the question itself was kind of vague), but I did discover the tool http://www.red-gate.com/products/sql-development/sql-compare/ from the question "Tool to upgrade SQL Express database after deployment", though I am wondering just how good of a 'strategy' that is.
There are a couple of options. Use the AutoMapping feature in Fluent NHibernate to minimize the mapping code you write; if your schema changes comply with the AutoMap conventions, then you only need to work with the corresponding domain object changes.
Another, less optimal, option is to take a database-first approach and have something like MyGeneration automatically generate the domain classes and NHibernate mapping files from the schema. This works if you have complete control of the database schema and it can be made to implement a good domain model design (both conditions that very rarely ever happen...).
In either approach, these tools can help handle the database scripting needed to "migrate" the schema changes to a new version.
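On the "can I append to the database without losing my data" point, NHibernate ships a SchemaUpdate tool that applies additive changes (new tables and columns) based on the mappings. A minimal sketch using Fluent NHibernate's AutoMapping, where the connection string and the MyEntity marker type are assumptions:

    // Builds the NHibernate configuration via Fluent NHibernate AutoMapping and then
    // applies additive schema changes. MyEntity marks the assembly holding the domain classes.
    using FluentNHibernate.Automapping;
    using FluentNHibernate.Cfg;
    using FluentNHibernate.Cfg.Db;
    using NHibernate.Tool.hbm2ddl;

    public static class SchemaUpdater
    {
        public static void UpdateSchema(string connectionString)
        {
            var configuration = Fluently.Configure()
                .Database(MsSqlConfiguration.MsSql2008.ConnectionString(connectionString))
                .Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf<MyEntity>()))
                .BuildConfiguration();

            // First argument false: don't just script to the console; second true: run against the DB.
            // SchemaUpdate adds missing tables/columns but does not drop or alter existing ones,
            // so destructive changes (renames, type changes) still need hand-written scripts.
            new SchemaUpdate(configuration).Execute(false, true);
        }
    }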
From my experience, after deployment you have to manually keep your DB structure up to date.
That means that whenever you add to or change your DB structure, you do so using a script with DDL commands.
When you're ready to deploy, you just run those DDL scripts against your production DB.
For example, if you add a 'bar' column to your 'foo' table, your script would be something like:
    ALTER TABLE foo ADD bar INT NOT NULL DEFAULT 0;

Dynamic N-Layer with ASP.NET

I'm trying to build a web application that lets the administrator talk to the database through C# and add new tables and columns to fit his requirements (sort of a very simple database studio), but I'm not trying to just create some spaghetti application.
So I'm trying to figure out how to generate the following things dynamically (automatically) when he creates a table, using that table to build them:
1- The business objects or entities (the classes, their objects and properties).
2- The data access layer (some simple methods that connect to the database and add, update, delete and retrieve items (objects)).
Is this possible? Any pointers on how to achieve it?
EDIT
I just opened your link! It's talking about data-bound controls and stuff; my question is way more advanced than that.
When you build an N-layered application, you start with the database schema and implementation, which is easy to do programmatically; then you start building the DAL classes (add, edit, etc., in other words the CRUD operations) on top of and from this database.
What I want to do is to allow the web administrator to add a new table through my application, and then, dynamically, the application would take the table and column names as parameters and create new classes, defining within them the CRUD methods that implement the SQL CRUD operations.
It would then also dynamically create the classes and define within them the variables, properties and methods that call and use the DAL methods, all of this based on the table and column names.
NOTE: All of this happens at run-time!
You might want to look into ASP.Net Dynamic Data. It's a RAD tool which very easily gives you CRUD functionality for your entities and more. Check it out.
Some time back I also asked a similar question on SO and got only one reply.
Today I was digging for information on MSDN and, as I had guessed, the MS CRM entity model works based on metadata. So basically whatever a CRM developer is working against is just metadata; they are not real objects as such. Following is the MSDN link.
Extend MS CRM Metadata and here is the MS CRM 4.0 SDK.
I hope this should get you started.
Update: I recently hit upon Visual Studio LightSwitch. I think this is what we wanted to build: a UI which picks up table information from the DB and then creates all the CRUD screens. VS LightSwitch is in its Beta 1 and has quite a lot of potential. It should be a nice starting point.
First, any man trying to create MS Access is doomed to recreate MS Access. Badly.
You are better off using ASP.NET Dynamic Data (as suggested) or ASP.NET MVC scaffolding. But runtime-generated platforms that actually make decent applications are really pipe dreams. You will need developer time to do anything complex, or to do it well.
What you are asking is nonsense. Why? Because the idea behind a BLL and n-tier design is that you know your data model well and can create a static class model to represent it.
If your data model is dynamic and changing, then you cannot create a static BLL (which is what a BLL is). What you will have to do is dynamically build your queries at run-time. This is not something that any of the traditional methods are designed to handle, so you must do everything yourself.
While it's possible to dynamically generate classes at run-time, this is probably not the approach you want to take, because even if you manage to make your BLL adapt to your dynamic database, the code that calls the BLL will not know anything about it, so it will never get called.
This is not a problem you will solve overnight, or by copying an existing solution. You will have to design it from scratch, using low-level ADO calls rather than relying on ORMs or any automation.
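To make the "build your queries at run-time" point concrete, here is a minimal sketch that reads a table's columns from INFORMATION_SCHEMA and loads the table with plain ADO.NET; the connection string and table name are whatever the admin tool supplies, and the table name is checked against the metadata first because identifiers cannot be passed as parameters:

    // Dynamic, metadata-driven access using low-level ADO.NET (no ORM).
    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    public static class DynamicTableReader
    {
        // Returns the column names of a table straight from the database metadata.
        public static List<string> GetColumns(string connectionString, string tableName)
        {
            var columns = new List<string>();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @table",
                connection))
            {
                command.Parameters.AddWithValue("@table", tableName);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        columns.Add(reader.GetString(0));
                }
            }
            return columns;
        }

        // Loads the whole table into a DataTable. The name is validated against
        // INFORMATION_SCHEMA above because identifiers cannot be parameterized.
        public static DataTable LoadTable(string connectionString, string tableName)
        {
            if (GetColumns(connectionString, tableName).Count == 0)
                throw new ArgumentException("Unknown table: " + tableName);

            using (var connection = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter("SELECT * FROM [" + tableName + "]", connection))
            {
                var table = new DataTable();
                adapter.Fill(table);
                return table;
            }
        }
    }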
