Our legacy application uses EntitySpaces for database access, but since ES has been end-of-life for several years and is causing some performance issues in our application, we're thinking about switching over to Entity Framework.
Is there an easy way to do this without completely rewriting all of our extension classes?
EntitySpaces is alive again, and the API has been updated and is far more streamlined. It's a single-DLL NuGet install too. What kind of performance issues are you seeing? I'd love to hear about them.
https://mikegriffinreborn.github.io/EntitySpaces/
I've been thinking about this for a while, and I don't think there is going to be a simple way to move away from EntitySpaces and over to Entity Framework. But the approach I would take (and probably eventually will), if you haven't already, is to extract an interface for each of your business-logic classes (the ones that inherit from the generated data classes), including every method you need.
Now add a Database First EF model and create new business-logic classes for each entity/model that implement the same interfaces. At that point you know exactly which methods need rewriting in Entity Framework (LINQ/lambda). It's a slow process, but this way you can do the migration over multiple release windows, slowly moving everything over and referencing the new EF business-logic classes as and when you have time (and, of course, any new tables can use EF straight away).
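To make that concrete, here is a minimal sketch of the idea. All of the names (User, IUserRepository, AppDbContext, etc.) are hypothetical placeholders, not generated code from either library:

using System.Data.Entity;

// Simplified entity and context, stand-ins for the real generated classes.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<User> Users { get; set; }
}

// The interface captures every method the rest of the app needs.
public interface IUserRepository
{
    User GetById(int id);
    void Save(User user);
}

// The existing business-logic class keeps its EntitySpaces internals;
// it just gains the interface.
public class EsUserRepository : IUserRepository
{
    public User GetById(int id)
    {
        // ...existing EntitySpaces query code stays here...
        throw new System.NotImplementedException();
    }

    public void Save(User user)
    {
        // ...existing EntitySpaces save code stays here...
        throw new System.NotImplementedException();
    }
}

// The new EF-backed class is filled in one method per release window.
public class EfUserRepository : IUserRepository
{
    private readonly AppDbContext _context;

    public EfUserRepository(AppDbContext context) { _context = context; }

    public User GetById(int id)
    {
        return _context.Users.Find(id);
    }

    public void Save(User user)
    {
        _context.Users.Add(user);
        _context.SaveChanges();
    }
}

Since callers depend only on the interface, swapping the implementation for each entity becomes a configuration change rather than a rewrite.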
I need to access some data in an existing SQL database and publish it using a REST service (using Web API).
In my previous, very small project I just accessed the EF context directly from my controllers, created some DTOs from my EF entities and returned them to the caller. That was simple and I got it working really fast.
Now, the project is not much bigger, but I want to do it 'the right way' this time, since everyone is talking about a layered architecture, even for small projects, so that unit testing etc. is much easier.
Being a newbie on this (yes, I need to read more books), I decided to start reading tons of blog posts about the architectural design of an application and so on.
The first thing was to get familiar with the various techniques for accessing data in the database using EF (I'm using v6.2, DB-First). Some say you need a repository for each entity, some say you should create a generic repository, and others say repositories are the new evil and should be avoided at all costs.
Some blog-posts I've read:
generic-dal-using-entity-framework
is-the-repository-pattern-useful-with-entity-framework
repositories-on-top-unitofwork-are-not-a-good-idea
why-entity-framework-renders-the-repository-pattern-obsolete
favor-query-objects-over-repositories
and so on.
Some others even say that your EF-generated POCOs should be separated from the 'pure' EF stuff like the EDMX: splitting-entity-framework-model-classes-separate-projects
Some of the posts are old and may be obsolete, but I'm still struggling to figure out the best way to accomplish my task.
Right now, I have 4 projects:
DGO.Core: contains my DTOs.
DGO.Data: contains my EF stuff and one repository class (see below for details).
DGO.Service: references DGO.Data and accesses the methods exposed by the repository class.
DGO.Webapi: references all three DLLs but uses the methods from the Service DLL.
I need to reference the Data DLL to be able to inject the data repository.
So right now my DB queries reside in the Data DLL (in the so-called repository class), which creates filled DTOs from my Core DLL. These DTOs are then passed to the Service DLL, which might apply some logic here and there, and then the DTO is passed on to the Web API controller.
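Roughly sketched, the flow looks like this (heavily simplified; all the names are made up for illustration, not my real code):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// DGO.Core: plain DTO, no EF dependency.
public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// DGO.Data: EF entity and context.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

// DGO.Data: the repository queries EF and returns filled DTOs.
public class ProductRepository
{
    public List<ProductDto> GetAll()
    {
        using (var ctx = new ShopContext())
        {
            return ctx.Products
                      .Select(p => new ProductDto { Id = p.Id, Name = p.Name })
                      .ToList();
        }
    }
}

// DGO.Service: works with the DTOs, applying logic here and there.
public class ProductService
{
    private readonly ProductRepository _repository;
    public ProductService(ProductRepository repository) { _repository = repository; }

    public List<ProductDto> GetProducts()
    {
        List<ProductDto> products = _repository.GetAll();
        // ...business logic...
        return products;
    }
}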
Is this a common approach for passing DTOs through all the layers?
Or is it better to split the POCOs from the EDMX and use those directly in my service layer?
The direction would then be 'Data Layer' -> POCOs -> 'Service Layer' -> DTOs -> Client (controller etc.).
And where should the queries take place? I think in the data layer, but some say it should be done in the service layer. My view is that the data layer is responsible for querying the data and the service layer is responsible for working with the data.
I hope I've made my problems clear; I can provide more code if necessary.
Thanks!
I just spent some time resolving the strangest bug, and I can't figure out why it happened. Maybe someone can follow along and shed some light on why...?
tl;dr: a lightweight MembershipContext fails because some completely unrelated object doesn't have an ID field, but it only does so when running under MVC: all the integration tests that make sure this stuff works correctly... work correctly.
What I Have:
I have a solution with a number of projects. The pertinent ones are:
LPA.Domain (contains POCOs for business objects/behavior)
LPA.Data (contains EF6 stuff with FluentAPI config and references LPA.Domain)
LPA.Web (contains an MVC project, refs Data and Domain)
This is a very normal setup for us and the project has been in development for some time with no issues.
Within the Data project I have two DbContexts. One, MembershipContext, is used only during login and bypasses the user-based audit system to validate the user. The MembershipContext has only three entities: User, UserMembershipDetail and UserRole.
The second, CoreContext, has all the normal stuff (including tracking the currently logged-in user ID for audit purposes). This one has many entities.
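For reference, a stripped-down sketch of the two contexts (heavily simplified; the real ones use Fluent API configuration, and the entity stubs here stand in for the real LPA.Domain POCOs):

using System.Data.Entity;

// Simplified stand-ins for the real LPA.Domain POCOs.
public class User { public int Id { get; set; } }
public class UserMembershipDetail { public int Id { get; set; } }
public class UserRole { public int Id { get; set; } }

// Lightweight context used only during login; bypasses the audit system.
public class MembershipContext : DbContext
{
    public DbSet<User> Users { get; set; }
    public DbSet<UserMembershipDetail> UserMembershipDetails { get; set; }
    public DbSet<UserRole> UserRoles { get; set; }
}

// Full context: many entities plus current-user audit tracking.
public class CoreContext : DbContext
{
    public DbSet<User> Users { get; set; }
    // ...many more DbSets...
}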
What I Did:
Now let's pretend I spend a few days doing some domain-level development. I add a handful of classes, set up their DB counterparts, build the models (in CoreContext; MembershipContext doesn't change), write integration tests for the models and unit tests for the domain work. All is well. I haven't touched MembershipContext at all.
Part of what I did was add a new class to LPA.Domain for an object called Matter (a legal matter). This was to mock up a single method I needed elsewhere, but it was not set up in the DB (yet). I added an Ignore() on the appropriate entity and will flesh it out later. No biggie, right? All of this still happening in the CoreContext.
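The relevant bit of configuration was essentially this (simplified):

// Inside CoreContext (EF6 Fluent API configuration):
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Matter is domain-only for now; exclude it from the EF model.
    modelBuilder.Ignore<Matter>();
    base.OnModelCreating(modelBuilder);
}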
What Happened:
I get done with a few days of domain-level development, then go to fire up the LPA.Web project and hit a strange error when trying to log in. Some digging around tells me basically this:
When trying to query Users in MembershipContext, it throws an exception saying that the LPA.Data.Matter EntityType has no key defined.
A couple of confusing things about this:
LPA.Data.Matter is not one of my objects. The Matter object resides in the LPA.Domain.Legal namespace. Presumably this is an internal thing where EF6 creates its own type to deal with things. OK, I can buy that...
MembershipContext makes no use of Matter at all. The three objects it does make use of have no downstream references (shallow or deep) to the Matter object.
Integration tests that build/retrieve MembershipContext.Users work fine! (Users being the offending entity in the exception.) I think this must indicate some difference between how the LPA.Web project loads LPA.Data.MembershipContext and how the test project does, but that doesn't make much sense to me...
The Matter object is used in tests for the domain project and for LPA.Data.CoreContext without issue, so why does it fail on the MembershipContext, and only when used through an MVC project?
How I Fixed It:
As noted, I added the Matter class as a quick helper on the domain side but hadn't fleshed out the storage side yet. As such, there was no ID field on the object. The resolution was as simple as adding an ID field.
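Simplified, the change amounted to this:

public class Matter
{
    // This is all it took: a key property EF6 can discover by convention.
    public int Id { get; set; }

    // ...the single mocked-up method, unchanged...
}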
This resolves the problem, but does not in any way help me understand what went wrong.
For what it's worth, in the MVC project we use a custom user and role provider based on the ExtendedMembershipProvider, which uses a custom repository (that in turn uses the MembershipContext). The instantiation of the MembershipContext is identical between the test and the MVC projects. I can't imagine why they'd act differently.
It's as if something in the MVC project is forcing EF6 to read and validate every object in the Domain assembly against a non-existent data model.
Does anyone have the slightest idea why this happened? I'm baffled, myself.
This post is meant to collect a list of suggestions on the MVVM approach... What tools do you use, what do you do to speed up development, how do you maintain your application, and are there any special ways of finding defects in this design pattern?
Here is what I do:
So first I create my model/DB with EF.
Then I create views (either user controls or windows) with their respective viewmodels. I usually place the viewmodel in the same location as the view. While the name of my view starts with "UC-name", I call the viewmodel just "name-model".
In my viewmodel I implement INotifyPropertyChanged.
In my XAML view I declare my viewmodel as a resource and bind my grids/controls via ItemsSource to that StaticResource.
I try to do a lot of front-end logic with triggers and styles, and I also place some code in the xaml.cs file when it concerns the behaviour of my controls.
I can reach my viewmodel from my view (xaml + xaml.cs).
For communication between viewmodels I use MVVM Light.
that's pretty much it.
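To illustrate the viewmodel side, here is a stripped-down version of the pattern (the class and property names are made up):

using System.ComponentModel;

// Minimal INotifyPropertyChanged viewmodel, as described above.
public class CustomerModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged(nameof(Name));
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}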
Things I'm thinking about
I'm thinking of using T4 templates to generate the viewmodel/view. What do you think of this? Is it worth it?
When using the MVVM Light Messenger we get subscription-based communication, and sometimes I find it hard to track what has changed in my DataContext. Any suggestions on this?
Any other improvements or suggestions are more than welcome!
Answering the first question, regarding View/ViewModel generation: I think it makes sense to use such tools for CRUD cases; otherwise it won't be that beneficial.
A pretty nice basic scaffolding example can be found here: WPF CRUD Generator. The WPF solution by DevExpress also looks really promising.
There are at least a couple of CodePlex projects addressing View/ViewModel generation:
WPF Scaffolder
ViewModel Tool by Clarius
But I am quite pessimistic about T4 for such scenarios. I think writing and polishing your own T4 templates will take much more time than adopting one of the existing tools.
Regarding the MVVM Light Messenger, I can say that it will take you some time to get used to it. As soon as you understand the difference between "regular" and message-driven MVVM, you'll be able to use it in the most efficient way. A very nice article about the Messenger is Messenger and View Services in MVVM. I also want to add a really important quote from there:
A Word of Caution About Messenger
Messenger is a powerful component that can greatly facilitate the task of communication, but it also makes the code more difficult to debug because it is not always clear at first sight which objects are receiving a message. Use with care!
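To make the subscription flow concrete, a minimal Messenger round trip looks something like this (the message type and viewmodel names are made-up examples, not from the article):

using GalaSoft.MvvmLight.Messaging;

// A simple message type carrying the changed value.
public class CustomerSelectedMessage
{
    public int CustomerId { get; set; }
}

public class DetailViewModel
{
    public DetailViewModel()
    {
        // Subscribe: this viewmodel reacts whenever the message is sent.
        Messenger.Default.Register<CustomerSelectedMessage>(this,
            msg => LoadCustomer(msg.CustomerId));
    }

    private void LoadCustomer(int id) { /* ...load data... */ }
}

public class ListViewModel
{
    public void OnSelectionChanged(int customerId)
    {
        // Publish: no direct reference to DetailViewModel needed.
        Messenger.Default.Send(new CustomerSelectedMessage { CustomerId = customerId });
    }
}

One habit that helps with the traceability concern: keep all message types in a single namespace, so you can quickly search for every Register and Send of a given message.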
I'm very much a proponent of Domain-Driven Design (DDD). First I have the designer write specifications, roughly adhering to the methodologies of Behavior-Driven Development (BDD). These then form the basis of unit tests for Test-Driven Development (TDD), for which I use NUnit. For the domain layer itself I start with an anemic domain model, i.e. entity classes containing mostly properties and virtually no methods; there are plenty of arguments both for and against this, but personally I find it works well. Coupled with this is the Business Logic Layer (BLL), which knows only about the domain entities.
For the Data Access Layer (DAL) I prefer NHibernate. It supports all the usual things you would expect, like lazy loading and repository management, but particularly important is the Object Relational Mapping (ORM), i.e. the bit that translates between your domain entities and the underlying database representation.
One of the problems with NHibernate, in my opinion, is that it uses XML files to do the mapping for the ORM. This means two things: first, any errors you introduce won't get picked up until run-time; second, it's not really a proper "solution" to ORM at all, because instead of writing mapping classes you just wind up writing XML files. Both of these problems can be solved by using Fluent. Fluent solves the first problem by replacing XML files with C# files, so your mapping declarations are now done in code, which will usually pick up errors at compile-time. It solves the second problem by providing an auto-mapper, which looks at your entities and generates the necessary mappings automatically. This can be manually overridden if and where needed, although in practice I find I seldom need to. Since the auto-mapper uses reflection it does tend to be a bit slow, but it can be run in an offline utility and then saved to a configuration file that is loaded at run-time for near-instant start-up; the same utility can also be used to create your database automatically. I've used this tech with MySql, MS SQL Server and MS SQL Server CE...they've all worked fine.
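A minimal sketch of such a bootstrap, assuming Fluent NHibernate with a hypothetical Customer entity (the connection string and namespace filter are placeholders):

using FluentNHibernate.Automapping;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NHibernate.Tool.hbm2ddl;

// Hypothetical auto-mapped entity; members are virtual for lazy loading.
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public static class SessionFactoryBuilder
{
    public static ISessionFactory Build()
    {
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString("Server=...;Database=...;Trusted_Connection=True;"))
            // Auto-map every type in the entity namespace; add manual
            // overrides only where the conventions don't fit.
            .Mappings(m => m.AutoMappings.Add(
                AutoMap.AssemblyOf<Customer>()
                       .Where(t => t.Namespace == "MyApp.Domain")))
            // Optionally generate the database schema from the mappings.
            .ExposeConfiguration(cfg => new SchemaExport(cfg).Create(false, true))
            .BuildSessionFactory();
    }
}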
On the other side of the tier is your view model. I've seen a lot of projects create an almost 1:1 mapping of domain entities to view-model classes. I may infuriate MVVM purists by saying this, but I really don't see the point in doing all that extra work for something that isn't really needed. NHibernate allows you to provide proxies for the classes it creates: using Castle DynamicProxy you can attach an interceptor to your NHibernate session factory that automatically injects INotifyPropertyChanged notification into all of your entity properties, so that they work with the WPF binding mechanism. Another product, uNhAddIns, allows you to replace any lists with an ObservableCollection in order to get INotifyCollectionChanged support (for reasons I won't go into, you can't just put an ObservableCollection into your entities without seriously affecting performance).
If you're designing and building your application properly using technologies like these, and fully unit-testing along the way, then you're going to need some way of handling Inversion of Control (IoC) so that you aren't passing object references around all over the place, and for that you'll need a dependency injection framework. My personal preference is Ninject, but Unity is pretty good too. Dependency injection is particularly important for good database session management (so that all relevant objects reference the same session); a good rule is one session per WPF form or one per web request.
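A minimal Ninject sketch, reusing the hypothetical SessionFactoryBuilder from above (a real web app would scope the session binding with InRequestScope() from Ninject.Web.Common):

using Ninject;
using NHibernate;

public static class IoCBootstrap
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();

        // One session factory for the lifetime of the application.
        kernel.Bind<ISessionFactory>()
              .ToMethod(ctx => SessionFactoryBuilder.Build())
              .InSingletonScope();

        // A new session per resolution here; in practice scope this
        // per WPF form or per web request.
        kernel.Bind<ISession>()
              .ToMethod(ctx => ctx.Kernel.Get<ISessionFactory>().OpenSession());

        return kernel;
    }
}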
There are lots of other little things I use to make life easier (MVVM Light, log4net, Moq for mocking objects in unit tests, etc.), but this is my core architecture. It takes a while to set up, but once you've got it all going you can build fully functional database applications in literally minutes, without any of the headaches traditionally associated with layer management in tiered enterprise applications...you just create your domain entities and then start coding against them. Your schema is created automatically, your database is created automatically, you can use your entity classes to fill your database for immediate stress testing, and you have full WPF support without having to pollute your entity classes with code or attributes not actually related to the domain. And since all development is driven by anemic domain entities, your data is already in the perfect format for serialization into HTML/AJAX/SOAP etc. when you want to give your app web capabilities.
You'll notice that I haven't discussed the presentation/XAML layer, mainly because that part is now straightforward. Using a decent architecture you can actually create a fully working and tested application that then only needs pure XAML added to turn it into a releasable product.
I generated a database-first ObjectContext using EF 4.3 in VS 2010. Then I used this class (and related classes) in a Windows.Forms application.
This time I want to use the same application, with minor additions to some forms and an additional table (and a FK to it), to create a new application. Since I want to manage both projects at the same time, I created a new solution for the second application, which subclasses the necessary forms and classes.
But I don't know how to apply this technique to the ObjectContext I generated before. If I use an automatically generated new ObjectContext, it will be a new class, so I would have to recompile both solutions whenever I apply a change (not to mention the necessary assembly-reference changes).
Manually creating a subclass of the aforementioned ObjectContext is, I guess, not possible unless I do csdl/msl/ssdl tricks by hand.
I want to avoid creating an interface layer between my code and the ObjectContext, because of the changes it would require and my lack of time.
Does anybody have an idea?
It seems like you're misunderstanding some concepts of the software development process.
This:
Since I want to manage both projects at the same time, I created a new solution for the second application, which subclasses the necessary forms and classes
is a very bad idea.
In fact, new functionality leads you to a new version of your software.
Do not inherit anything. Just make a development branch (in terms of source control) for your previous project version, and continue developing the new version, extending its functionality.
This will give you two completely independent versions of your software, which you can support simultaneously.
Platform: C# 2.0
Using: Castle.DynamicProxy2
I have been struggling for about a week now trying to find a good strategy for rewriting my DAL. I tried NHibernate and, unfortunately, it was not a good fit for my project. So I have come up with the following design thus far:
I first start by registering my DTOs and my data mappers:
MetaDataMapper.RegisterTable(typeof(User));
MapperLocator.RegisterMapper(typeof(User), typeof(UserMapper));
This maps each DTO as it is registered, using custom attributes on the properties of the DTO, essentially:
[Column(Name = "UserName")]
I then have a mapper that belongs to each DTO; for this type it would be UserMapper. This data mapper handles calling my ADO.Net wrapper and then mapping the result to the DTO. However, I am now in the process of enabling deep loading (and subsequently lazy loading), and this is where I am stuck: my User DTO may have an Address object (FK) which requires another mapper to populate it, but I have to determine at run time that the AddressMapper is the one to use.
My problem is handling the types without having to explicitly go through a list of them (not to mention the headache of always keeping that list updated) each time I need to determine which mapper to return. My solution was a MapperLocator class that I register with (as above) and that returns an IDataMapper interface, which all of my data mappers implement. I can then cast it to UserMapper if I am dealing with User objects. This, however, is not so easy when I am trying to determine the type of data mapper to return at run time. Since generics have to be known at compile time, using AOP and passing in the type at run time is not an option without reflection. I am already doing a fair bit of reflection when mapping the DTOs to tables, reading attributes and such, and my MapperFactory uses reflection to instantiate the proper data mapper. So I am trying to do this without reflection, to keep those expensive calls down as much as possible.
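To make the locator concrete, what I have is essentially this (simplified, C# 2.0 syntax; the IDataMapper signature is made up for illustration):

using System;
using System.Collections.Generic;

// The interface every data mapper (UserMapper, AddressMapper, ...) implements.
public interface IDataMapper
{
    object Load(int id); // simplified stand-in for the real mapping methods
}

public static class MapperLocator
{
    private static readonly Dictionary<Type, Type> mappers = new Dictionary<Type, Type>();

    public static void RegisterMapper(Type dtoType, Type mapperType)
    {
        mappers[dtoType] = mapperType;
    }

    public static IDataMapper GetMapper(Type dtoType)
    {
        // This reflection call is exactly the per-lookup cost
        // I would like to eliminate.
        return (IDataMapper)Activator.CreateInstance(mappers[dtoType]);
    }
}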
I thought the solution might be found in passing around an interface, but I have not been able to make that idea work. Then I thought the solution might lie in using delegates, but I have no idea how to implement that either. So...frankly...I am lost. Can you help, please?
I will suggest a couple of things.
1) Don't prematurely optimize. If you need to use reflection to instantiate your *Mappers, do it with reflection. Look at the headache you're causing yourself by not doing it that way. If you have problems later, then profile them to see if there are faster ways of doing it.
2) My question to you would be: why are you trying to implement your own DAL framework? You say that NHibernate isn't a good fit, but you don't elaborate on why. Have you tried any of the dozens of other ORMs? What are your criteria? Your posted code looks remarkably like Linq2Sql's mapping.
Lightspeed and SubSonic are both great lightweight ORM packages. Linq2Sql is an easy-to-use mapper, and of course there's Microsoft's Entity Framework, which has a whole team at Microsoft dealing with the problems you're describing.
You might save yourself a lot of time, especially maintenance wise, by looking at these rather than implementing it yourself. I would highly recommend any of those that I mentioned.