We work with a database that isn't ours to manage. We defined some views on it so that our code (C# with Entity Framework) can map directly to these views. These views will later be handed over to the database owners, who will implement them so that we can use them.
During testing, we mock that database. We used the Entity Framework Tools on a backup of the database to generate a DbContext for it, and we recreate it each time we run the tests. Is this even the way to go?
During testing we would like to insert data into some tables and read the results out of the related views. We do this to test the views as well as the C# code that acts upon them (there is non-trivial logic in those views).
What is your opinion on this? Shouldn't we, rather than recreating the mock database each time, use a static one with the views already defined? Should we define the views in C# so that they are created on the database? Is our approach completely wrong? How do others tackle similar problems?
"During testing, we mock that database...."
In EF version 6 or later you can also mock the DbContext.
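As a rough illustration, here is the commonly documented EF6 pattern for mocking a DbSet with Moq. The ImAView entity and ApplicationDbContext are placeholder names for this sketch, and the set property on the context must be declared virtual for Moq to substitute it:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class ViewQueryTests
{
    [TestMethod]
    public void ImAViews_CanBeQueried_AgainstInMemoryData()
    {
        var data = new List<ImAView>
        {
            new ImAView { Id = 1, Name = "First" },
            new ImAView { Id = 2, Name = "Second" }
        }.AsQueryable();

        // Wire a mock DbSet up to the in-memory queryable so LINQ works against it.
        var mockSet = new Mock<DbSet<ImAView>>();
        mockSet.As<IQueryable<ImAView>>().Setup(m => m.Provider).Returns(data.Provider);
        mockSet.As<IQueryable<ImAView>>().Setup(m => m.Expression).Returns(data.Expression);
        mockSet.As<IQueryable<ImAView>>().Setup(m => m.ElementType).Returns(data.ElementType);
        mockSet.As<IQueryable<ImAView>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

        // The ImAViews property must be virtual for this setup to take effect.
        var mockContext = new Mock<ApplicationDbContext>();
        mockContext.Setup(c => c.ImAViews).Returns(mockSet.Object);

        Assert.AreEqual(2, mockContext.Object.ImAViews.Count());
    }
}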
"During testing we would like to insert..."
Do you have the source code of the inherited DbContext? My recommendation is to handle the views just like tables: create an entity class for each view and, in your application DbContext, map the view like a normal table.
public DbSet<ImAView> ImAViews { get; set; }
With database migrations you can modify your view with a normal T-SQL script.
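A minimal sketch of what that can look like in EF6. The ImAView entity, its columns, and the view's SQL are hypothetical; the key must be whatever uniquely identifies a row of the view:

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Migrations;

public class ImAView
{
    [Key]
    public int Id { get; set; }      // a column that uniquely identifies a row
    public string Name { get; set; }
}

public class ApplicationDbContext : DbContext
{
    // virtual so the set can be mocked in tests
    public virtual DbSet<ImAView> ImAViews { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the entity onto the view exactly as if it were a table.
        modelBuilder.Entity<ImAView>().ToTable("ImAView");
        base.OnModelCreating(modelBuilder);
    }
}

// In a migration, the view itself is created or changed with plain T-SQL.
public partial class AddImAView : DbMigration
{
    public override void Up()
    {
        Sql("CREATE VIEW dbo.ImAView AS SELECT Id, Name FROM dbo.SomeTable");
    }

    public override void Down()
    {
        Sql("DROP VIEW dbo.ImAView");
    }
}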
"What is your opinion on this? Shouldn't"
If you are testing the application (and, in version 6, also the data access layer) you can mock everything. Do not use database creation; this will make your tests very slow, and in some cases the tests will end up depending on other tests.
If you want to test the database itself, or migrations, or some bulk operations, then you have to create a database (you can do everything in SQL Server without C#) and feed it with data.
I'm in the process of learning about Test Driven Development (TDD) so I can use it in a project I've started, but I've run into a question on how to set up a specific type of test I'd like to do.
The scenario is that I have a View that allows someone to edit information about a User (such as username, first name, last name, etc.). In many scenarios the User being edited will already exist in the database, so when they hit save, the information gets updated in the database by that View's View Model.
What I'd like to test is that the View Model is saving this information to the database. This is done using an Entity Framework DbContext I pass into the View Model during construction, which means that I need to create a DbContext in the unit test to pass into the View Model that can be updated and compared against.
The Assert I'd like to test would be something along the lines of:
Assert.AreEqual(ViewModelFake.EditedUser.Username, DataBaseContextFake.Users.Find(1).Username);
The unit test first creates the DbContext and populates it with a User; later in the test, that User's information is changed. The command being tested in the View Model is responsible for saving this edited information to the database, replacing what the DbContext was originally populated with.
I've been searching for a solution for the better part of these last two days but haven't been able to track down examples of people doing the same thing. Is this something that should even be handled in a unit test? Please note that I'm not using a repository/unit of work layer on top of Entity Framework.
At work, in our project, we created an abstraction over the Entity Framework context.
Imagine that we have an interface, let's call it IMyCompanyContext, and that it exposes all the methods that you use from DbContext, including SaveChanges(). All database-mapped collections are exposed through this interface using the ICollection interface. The query provider understands our queries (it checks at runtime that behind the interface there is a real DbContext). More than this, you can use an inversion of control approach, maybe even using a DI container for requesting a new IMyCompanyContext.
If you want to test this, you just have to implement the interface with a bogus context that exposes some lists you can query.
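A minimal sketch of the shape this can take. The names are invented, and IQueryable is used here (rather than ICollection) so that LINQ queries run against both the real context and the in-memory fake:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public interface IMyCompanyContext
{
    IQueryable<User> Users { get; }
    int SaveChanges();
}

// Production implementation: just the real DbContext behind the interface.
public class MyCompanyContext : DbContext, IMyCompanyContext
{
    public DbSet<User> UserSet { get; set; }

    IQueryable<User> IMyCompanyContext.Users
    {
        get { return UserSet; }
    }
}

// Bogus context for tests, backed by a plain in-memory list.
public class FakeMyCompanyContext : IMyCompanyContext
{
    public readonly List<User> UserList = new List<User>();

    public IQueryable<User> Users
    {
        get { return UserList.AsQueryable(); }
    }

    public int SaveChanges()
    {
        return 0; // nothing to persist in memory
    }
}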
I've got an older ASP project that is being re-written as a responsive MVC4 w/Bootstrap web application. The original version used direct SQL queries to interact with the application's backend SQL database, and a custom class called EmployeeUserInfo that directly queries the ERP system for current employee information (current tasks, hours worked, etc.). That connection is strictly read-only.
Using Entity Framework, it was extremely easy to create the models for the various writable databases that make up the application, but how can I add the original EmployeeUserInfo class to this EDMX model? I'd prefer not to rewrite the class because it does a lot of querying and analyzing to build the EmployeeUserInfo object. Is it possible to combine another class with an EF database-first generated model?
I'm stuck now because I can use the EmployeeUserInfo as the model for a view, but then I cannot access the EF generated model to read/write to the application database. I feel like I'm overlooking something here and making this more difficult than it needs to be.
I'm not sure what I'm looking for, and everything that I've seen so far looks like it will work until I really dive into it. I just need some pointers from the brains here. I'm working on an ASP.NET MVC, EF5, SQL 2012 project. We have a model set that isn't code-first (the entities were built using the designer) and, as of right now, everything is working just fine. But we have this setup script (as convoluted as I've ever seen) and I need to get it into something more automated. Right now, the setup script pre-populates the tables with data: lookups, reference data, etc. I'm looking for a way to automate this further, without having to run this script, and even more so, to generate the database and tables automatically.

Every article I've read seems to do the trick (migrations, seeding, etc.) but there is one thing they don't take into consideration: we federate services, so the actual EDMX sits behind a WCF Data Services 5.6 service. I have access to the models and whatnot, but the WCF service exposes a DataServiceContext, which doesn't have a Seed method on it. Am I looking at the right stuff here, or is the only option the confounded setup script (all C#-driven)? This website has been instrumental for this: http://www.entityframeworktutorial.net/code-first/seed-database-in-code-first.aspx as well as this: Auto Create Database Tables from Objects, Entity Framework, but I don't see how I can use these over WCF Data Services 5.6.
The short answer is that Model-First doesn't give you a seed method because they want you to use a SQL script, but you have a few choices:
Use EF PowerTools (or VS2013's EF Designer) to generate the "Code-First" model from your DB. This will allow you to seed your DB, and have finer control over how everything operates under the hood.
Use a SQL script to seed with. Generally, if you make changes to your schema, you'll re-run your recreate DB script. Create a separate script to populate your DB and keep it handy. If you feel more comfortable in code than SQL, you can make a console app (or whatever type of app you want) and keep it up to date with your schema.
If all you need is seed data, and there is a good business case to expose a method to your service consumers, you can keep Model-First, create a stored procedure to seed your DB, and expose it as an EF function. You can then expose this in your WCF service.
Personally, I tend towards designing the DB myself, using VS 2013's EF6 POCO generator, then using Code-First because of the better granular control that you get with real data classes. Then I do some cleanup work, write my seed methods, etc.
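If you go the Code-First route from the first option, the seed hook looks roughly like this; the context, entity, and values are invented for the example:

using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<MyContext>
{
    protected override void Seed(MyContext context)
    {
        // AddOrUpdate keys on Code, so running Seed repeatedly won't duplicate rows.
        context.Lookups.AddOrUpdate(
            l => l.Code,
            new Lookup { Code = "NEW", Description = "New item" },
            new Lookup { Code = "ARC", Description = "Archived item" });

        context.SaveChanges();
    }
}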
I am working on an MVC4 project which will need to use a number of different databases, each with a few stored procedures for searching. The site is an asset search tool which needs to query various existing systems. If I allow EF to generate models on its own, I will end up with a model for each procedure I use in each database.
What I would prefer is to have my own POCO model already defined and the EF maps its results to that Model. So regardless of what database the data is taken from it maps back to that same Model. The column names in each database differ slightly so it would really need to be mapping columns to model properties.
There is no writing back to the database; it purely selects data out.
On the 'Edit Function Import' form I can create a model based on the results. There is also an option to view 'Function Import Mapping' but it does not appear to do what I am looking for.
Has anyone else tried this?
Added an image to help explain the issue
The closest I have managed so far is to have EDMX1 query two databases. This only works because they are on the same DB server; I had to fully qualify the DB names in the stored procedure. I could then use one EF model as a return type for the two queries. That model still is not usable in another EDMX, though, so if I need to connect to a different DB server, I still cannot share the model. So the problem is not solved.
Here is an image of the current progress.
Function Import Mapping is for mapping stored procedure / function calls to EF code. It's not really relevant here unless you're using stored procs (which, 90% of the time, is not the way to go with EF - only use stored procs for more complex operations).
An EF context, by its very nature, can only have a single database associated with it. You need to create multiple contexts in order to access multiple databases at once.
What I would do in your case is create a database-first schema (.edmx) file for each database, then write a service-layer abstraction above them that allows you to flatten the data into your expected model. This is the kind of thing I do all the time, regardless of how many databases I'm working with at once. You've almost outlined this in your first diagram. The service layer may have multiple classes (for example, for a blog website you might have BlogService, UserService, CommentService, etc.), each of which contains methods that you call from your application layer.
I've put a quick diagram together that might help to explain:
http://www.gliffy.com/go/publish/image/4818386/L.png
The service layer does all of your EF work, and your application layer (or business layer, whatever you want to call it) will do all of your business logic.
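As a rough sketch of that idea (every name here is invented: Db1Entities/Db2Entities stand for the two generated contexts, and SearchAssets/FindAssets stand for whatever you named the imported stored procedures):

using System.Collections.Generic;
using System.Linq;

// One POCO shared by the whole service layer, regardless of source database.
public class AssetResult
{
    public string Name { get; set; }
    public string Source { get; set; }
}

public class AssetSearchService
{
    public IList<AssetResult> Search(string term)
    {
        using (var db1 = new Db1Entities())
        using (var db2 = new Db2Entities())
        {
            // Each context calls its own imported stored procedure,
            // and the differing column names are flattened right here.
            var fromDb1 = db1.SearchAssets(term)
                .Select(r => new AssetResult { Name = r.AssetName, Source = "Db1" });
            var fromDb2 = db2.FindAssets(term)
                .Select(r => new AssetResult { Name = r.Asset_Nm, Source = "Db2" });

            return fromDb1.Concat(fromDb2).ToList();
        }
    }
}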
This setup lends itself well to TDD and Dependency Injection / IoC. Everything is neat and nicely separated.
We are using .NET 4.0 (C#), VS 2010, EF 4.1, and legacy code in the project we are working on.
I'm working on a WinForms project where I have made the decision to start using Entity Framework 4.1 for accessing an MS SQL DB. The code base is quite old, and we have an existing data layer that uses data adapters, which are used all over the place (in web apps and WinForms apps). My plan is to replace the old DB access code with EF over time and get rid of the tight coupling between the UI layers and the data layer.
So my idea is to more or less combine EF with the legacy data access layer and slowly replace the legacy data layer with a more modern take on things using EF. So for now we need to use both EF and the legacy db access code.
What I have done so far is add a project containing the edmx file and context. The edmx is generated using the database-first approach. I have also added another project that contains the POCO classes (generated with the ADO.NET POCO Entity Generator). I have more or less followed Julia Lerman's approach in her book "Programming Entity Framework" for splitting the model from the generated POCO classes. The database model has been set for years and it's not an option to change the tables, relationships, triggers, stored procedures, etc., so I'm basically stuck with the DB model as it is.
I have read about the repository and unit of work patterns, and I kind of like them, but I struggle to implement them when I have both EF and the legacy DB access code to deal with, especially since I don't have the time to replace all of the legacy DB access code with a pure EF implementation. In a perfect world I would start all over again with a fresh take on the data model, but that is not an option here.
Are the repository and unit of work patterns the way to go here? In order to use the POCO classes in my business layer, I sometimes need both EF and the legacy DB code to populate them. In other words, I can sometimes use EF to retrieve part of the data I need, use the old DB access layer to retrieve the rest, and then map the data to my POCO classes. When I want to update some data, I need to pick the data from the POCO classes and use the legacy data access code to store it in the database. So I need to map the data retrieved from the legacy data access layer to my POCO classes when I want to display it in the UI, and vice versa when I want to save data to the database.
To complicate things, we store some data in tables whose names we don't know before runtime (please don't ask me why :-) ). So in the old DB access layer, we had to create SQL statements on the fly, inserting the table and column names based on information from other tables.
I also find that the relationships between the POCO classes are somewhat too database-centric. In other words, I feel that I need a more simplified domain model to work with. Perhaps I should create a domain model that fits the bill and then use the POCO classes as "DAOs" to populate the domain model classes?
How would you implement this using the Repository pattern and Unit of Work pattern? (if that is the way to go)
Alarm bells are ringing for me! We tried to do something similar a while ago (only with NHibernate, not EF4). We had several problems running ADO.NET alongside an ORM - database concurrency being a big one.
"The database model has been set for years and it's not an option to change the tables, relationships, triggers, stored procedures, etc., so I'm basically stuck with the DB model as it is."
Yep. Same thing! The problem was that our stored procs contained a lot of business logic and weren't simple CRUD procs, so keeping the ORM in sync with the various updates performed by a stored procedure was not easy at all - the Single Responsibility Principle is not a good one to break!
"My plan is to replace the old DB access code with EF over time and get rid of the tight coupling between the UI layers and the data layer."
Maybe you could decouple without the need for an ORM - how about putting a service/facade layer behind your UI layer to coordinate all interactions with the underlying domain and hide it from the UI?
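A tiny sketch of what such a facade might look like. Every name here is invented; the point is only that the UI sees one interface, regardless of whether EF or the legacy adapters do the work behind it:

// The UI layer only ever sees this interface.
public interface IOrderFacade
{
    OrderDto GetOrder(int orderId);
    void SaveOrder(OrderDto order);
}

public class OrderFacade : IOrderFacade
{
    private readonly OrdersContext _efContext = new OrdersContext();
    private readonly LegacyOrderAdapter _legacyAdapter = new LegacyOrderAdapter();

    public OrderDto GetOrder(int orderId)
    {
        // New read path goes through EF...
        var order = _efContext.Orders.Find(orderId);
        return new OrderDto { Id = order.Id, Total = order.Total };
    }

    public void SaveOrder(OrderDto order)
    {
        // ...while the write path still uses the legacy data adapter,
        // and can be migrated to EF later without touching the UI.
        _legacyAdapter.UpdateOrder(order.Id, order.Total);
    }
}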
If your database is 'king' and your app is highly data driven I think you will always be fighting an uphill battle implementing the patterns you mention.
Embrace ADO.NET for this project - use EF4 and DDD patterns on your next greenfield project :)
EDMX + the POCO class generator results in EFv4 code, not EFv4.1 code, but you don't have to bother with these details. EFv4.1 just offers a different API which does exactly the same thing (it is only a wrapper around the EFv4 API).
Depending on how you use datasets, you can run into some very hard problems. Datasets are a representation of the change set pattern: they know what changes were made to the data, and they are able to store just those changes. EF entities know this only if they are attached to the context which loaded them from the database. Once you work with detached entities, you must make a big effort to tell EF what has changed - especially when modifying relations (detached entities are a common scenario in web applications and web services). For those purposes EF offers another template called self-tracking entities, but they have other problems and limitations (for example, missing lazy loading, or the fact that you cannot apply changes when an entity with the same key is attached to the context, etc.).
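To illustrate the kind of effort involved, here is a minimal sketch (MyContext and Order are placeholder names): reattaching a detached entity and marking it modified tells EF to update all scalar properties, but relationship changes still have to be reconciled by hand.

using (var context = new MyContext())
{
    // The entity was loaded elsewhere (e.g. received from a web service)
    // and is therefore not tracked by this context.
    context.Orders.Attach(detachedOrder);

    // Flag every scalar property as changed; EF will issue a full UPDATE.
    context.Entry(detachedOrder).State = EntityState.Modified;

    context.SaveChanges();
}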
EF also doesn't support several features used in datasets - for example, unique keys and batch updates. It's funny that newer MS APIs usually solve some pains of the previous APIs, but at the same time provide far fewer features than those previous APIs, which introduces new pains.
Another problem can be performance - EF is slower than direct data access with datasets and has higher memory consumption (and yes, there are some memory leaks reported).
You can forget about using EF to access tables which you don't know at design time. EF doesn't allow any dynamic behavior: table names and the type of database server are fixed in the mapping. Other problems can arise from the way you use triggers - ORM tools don't like triggers, and EF has limited support for database-computed values (a value can be filled either in the database or in the application, but not both).
Filling POCOs from EF + datasets together sounds like something that will not be possible using EF alone. EF supports a number of mapping patterns, but the possibilities for mapping several tables to a single POCO class are extremely limited and constrained (if you want those tables to stay editable). If you mean just loading one entity from EF and another entity from a data adapter and making a reference between them, you should be OK - in this scenario the repository sounds like a reasonable pattern, because the purpose of the repository is exactly this: to load or persist data. A unit of work can also be useful, because you will most probably want to reuse a single database connection between EF and the data adapters to avoid a distributed transaction when saving changes. The UoW will be the place responsible for handling this connection.
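A rough sketch of a unit of work that shares one connection between both worlds. The names are invented, and this assumes MyContext forwards the DbContext(DbConnection, bool) constructor so it can run on an externally owned connection:

using System;
using System.Data.SqlClient;

public class UnitOfWork : IDisposable
{
    private readonly SqlConnection _connection;

    public MyContext Context { get; private set; }

    public UnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        // contextOwnsConnection: false - the unit of work controls the lifetime.
        Context = new MyContext(_connection, false);
    }

    // Legacy code gets adapters built on the very same connection,
    // so EF and data-adapter writes can share one local transaction.
    public SqlDataAdapter CreateLegacyAdapter(string selectSql)
    {
        return new SqlDataAdapter(selectSql, _connection);
    }

    public void Dispose()
    {
        Context.Dispose();
        _connection.Dispose();
    }
}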
EF mapping is tied to the database design - you can introduce some object-oriented modifications, but EF still remains closely dependent on the database. If you want to use a more advanced domain model, you will probably need separate domain classes filled from EF and from the datasets. Again, it will be the responsibility of the repository to hide these details.
From what we have implemented so far, I have learned the following things:
POCO and self-tracking objects are difficult to deal with: if you do not have a clear understanding of what goes on inside them, you will hit plenty of unexpected behavior in code that may have worked well in a previous project.
Changing patterns is not easy. So far we have been managing simple CRUD without the unit of work and identity map patterns; a lot of the legacy code we wrote in the past does not take these new patterns into account, and its logic will not work correctly.
In our previous code, we were simply using transactions and single insert/update/delete statements sent directly to the database, assuming that server-side transactions would take care of all operations.
Under those conditions, we were dealing directly with IDs all the time; newly generated IDs were immediately available after a single insert statement. However, this is not the case with EF.
In EF, we are not dealing with IDs; we are dealing with navigation properties, which is a huge change from earlier ADO.NET programming methods.
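For example (with hypothetical Order/Customer entities), where older ADO.NET code assigned a freshly generated key by hand, EF code assigns the object itself and leaves key generation to SaveChanges:

// Earlier ADO.NET style: run the insert, read the generated ID back,
// then stamp it onto the child record by hand.
order.CustomerId = newCustomerId;

// EF style: wire up the navigation property and let SaveChanges()
// generate the keys and fix up the foreign key.
order.Customer = customer;
context.Orders.Add(order);
context.SaveChanges(); // customer.Id and order.CustomerId are populated here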
From our experience, we found that simply swapping EF in for the earlier data access code will result in chaos. But EF + RIA Services offers a completely new solution, where you will probably get everything you need and your UI will bind to it very easily. So if you are thinking about a complete rewrite using UI + RIA Services + EF, then it is worth it, because a lot of the dependency on query management disappears automatically, and you will focus only on the business logic. But this is a big decision, and the number of man-hours required for a complete rewrite versus just swapping in EF is almost the same.
So we went the UI + RIA Services + EF way, and we started replacing one module at a time. For the most part, EF will easily co-exist with your existing infrastructure, so there is no harm.