I'm looking into using FluentNHibernate as my ORM, but I'm wondering how best to write the database deployment part. I have prior experience with Entity Framework and code-first migrations, but I'm curious about FluentNHibernate, so I'm playing around with it. I've looked at the FunnelWeb blog engine, but I haven't yet had the time to really get to grips with how they do it, so I'm after some help with a very specific part of the project, which I hope someone can summarise.
My Visual Studio solution is separated into the following assemblies (I'm going to look into using an IoC container like Autofac, but that will come later):
Solution
  Domain Assembly
    DatabaseDeployment Namespace
    Model Namespace
  Web UI
So, in my Domain assembly I'd like to have a DBMigrator class whose contract will look something like this:
public interface IDBMigrator
{
    bool NeedsUpdating();
    void UpdateDatabase();
}
Now, in my WebUI assembly (which is an MVC3 project) I would like to do something like the following (in the global.asax):
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    var migrator = new DBMigrator();
    if (migrator.NeedsUpdating())
    {
        migrator.UpdateDatabase();
    }
}
This migrator will use .sql scripts (so I can keep my database version controlled in Subversion) and will likely have a table in the database that tells me which scripts have been executed against the schema and which haven't yet (a rough sketch of what I mean follows the questions below). So, to summarise, I suppose my actual questions are:
Am I on the right track to version controlling (and automating) my database deployment?
At what point should I be checking for (and applying) schema changes: should this really be automated, or developer-instigated?
I'm looking at some frameworks now such as DbUp and MigratorDotNet but I'd like to get my strategy right in my head first, before I go and adopt a framework for this. How do you handle your database migrations?
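For illustration, here is a rough sketch of the kind of DBMigrator implementation I have in mind. The SchemaVersions table, the constructor parameters and the assumption that each script is a single batch (no GO separators) are just placeholders for the idea, not a finished design:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

// Rough sketch only: assumes a SchemaVersions table in the target database and a
// folder of version-controlled .sql scripts; names and error handling are simplified.
public class DBMigrator : IDBMigrator
{
    private readonly string _connectionString;
    private readonly string _scriptFolder;

    public DBMigrator(string connectionString, string scriptFolder)
    {
        _connectionString = connectionString;
        _scriptFolder = scriptFolder;
    }

    public bool NeedsUpdating()
    {
        return GetPendingScripts().Any();
    }

    public void UpdateDatabase()
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            foreach (var script in GetPendingScripts())
            {
                // Run the script (assumed to be a single batch without GO separators).
                using (var apply = connection.CreateCommand())
                {
                    apply.CommandText = File.ReadAllText(script);
                    apply.ExecuteNonQuery();
                }

                // Record the script so it is never applied twice.
                using (var record = connection.CreateCommand())
                {
                    record.CommandText =
                        "INSERT INTO SchemaVersions (ScriptName, AppliedOn) VALUES (@script, @appliedOn)";
                    record.Parameters.AddWithValue("@script", Path.GetFileName(script));
                    record.Parameters.AddWithValue("@appliedOn", DateTime.UtcNow);
                    record.ExecuteNonQuery();
                }
            }
        }
    }

    private IEnumerable<string> GetPendingScripts()
    {
        // Scripts on disk that are not yet listed in SchemaVersions, in name order.
        var applied = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            using (var query = connection.CreateCommand())
            {
                query.CommandText = "SELECT ScriptName FROM SchemaVersions";
                using (var reader = query.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        applied.Add(reader.GetString(0));
                    }
                }
            }
        }

        return Directory.GetFiles(_scriptFolder, "*.sql")
                        .Where(path => !applied.Contains(Path.GetFileName(path)))
                        .OrderBy(path => Path.GetFileName(path))
                        .ToList();
    }
}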
Take a look at FluentMigrator
http://nuget.org/packages/FluentMigrator
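A migration written with it looks roughly like this (the table name and version number are purely illustrative); FluentMigrator records applied migrations in its own version table, much like the scripts table you describe:

using FluentMigrator;

// Illustrative migration only.
[Migration(201201011200)]
public class AddEmployeeTable : Migration
{
    public override void Up()
    {
        Create.Table("Employee")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Name").AsString(100).NotNullable();
    }

    public override void Down()
    {
        Delete.Table("Employee");
    }
}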
I couldn't find a specific tool that did what I wanted, so have made my own following the convention in my question. If anyone else comes across this and wants to use it, they can grab it from here:
NuGet Package Manager link
User Guide
I have found this blog post: http://blog.bitdiff.com/2012/05/sharing-common-view-model-data-in.html?showComment=1499113088147#c5286707438454380796 about sharing strongly-typed common view model data in ASP.NET MVC, and it looks to me like it would solve some of the problems I have with keeping track of some user-related data across views.
I'm a complete novice when it comes to DI and Unity, as I have never used them before, but I do understand the benefits of using them. The post is from May 2012 but, as far as I can see, it should still be valid, perhaps with some small changes.
I’m using C#, MVC 5, EF, Code First, Migrations, Unity V4.01 and Unity.MVC V4.01 with VS2015 Community Edition.
I have followed the blog post from start to near finish (lacking the test) and it all compiles. I have one problem though; this line causes trouble:
GlobalFilters.Filters.Add(container.Resolve<LayoutModelAttribute>(), 1);
As far as I can tell, the right place to place the line is in the FilterConfig.cs file in the App_Start folder, where I've done this:
public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new HandleErrorAttribute());

    var container = UnityConfig.GetConfiguredContainer();
    filters.Add(container.Resolve<LayoutModelAttribute>());
}
The container.Resolve gets a squiggly line beneath it and the project won’t compile. The error description is this:
"The non-generic method 'IUnityContainer.Resolve(Type, string, params ResolverOverride[])' cannot be used with type arguments"
I've tried some other places as well, but the result is the same. I now lean towards the notion that Unity itself has changed since 2012 and must now be resolved in a different way.
I'm pretty sure I'm doing something wrong, and that it is probably because the blog post is from 2012 and Unity now works differently. Due to my lack of experience with Unity, I'm unable to figure out how to change the line of code that won't compile, or to what extent and how to refactor the blog post's approach if necessary.
I’m hoping someone out there can point me in the right direction.
It seems you are using the container without it having the capability of injecting. Have a look at the following answer:
Simple Injector property injection on action filter
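For what it's worth, one common cause of that exact compiler error is that the generic Resolve<T> overloads in Unity 4.x are extension methods, so they are only visible when the Microsoft.Practices.Unity namespace is imported. Below is a minimal sketch of FilterConfig under that assumption (you also need a using for the namespace that contains LayoutModelAttribute); the commented-out line shows the non-generic overload with a cast, which compiles either way:

using System.Web.Mvc;
using Microsoft.Practices.Unity;   // brings the generic Resolve<T> extension methods into scope in Unity 4.x

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());

        var container = UnityConfig.GetConfiguredContainer();

        // Generic extension method (needs the using above)...
        filters.Add(container.Resolve<LayoutModelAttribute>());

        // ...or, equivalently, the non-generic IUnityContainer method with a cast:
        // filters.Add((LayoutModelAttribute)container.Resolve(typeof(LayoutModelAttribute), null));
    }
}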
In my project, I am trying to implement the repository pattern and unit of work.
I found some web sites describing how to implement it, such as:
http://www.codeproject.com/Articles/688929/Repository-Pattern-and-Unit-of
http://www.codeproject.com/Articles/561584/Repository-Pattern-with-Entity-Framework-using
I was wondering why there isn't a generic Unit of Work and Repositories framework, so I tried several searches on the internet and found this:
http://genericunitofworkandrepositories.codeplex.com/
This framework is code first, but my project is model first, so it doesn't work correctly.
Could you please suggest a model-first framework like this?
My project is an internet web site with one database. If there is a plausible reason, I can change from the model-first approach to the code-first approach.
Thanks for your time.
We've abstracted all the interfaces in our latest release into the Repository.Pattern project https://genericunitofworkandrepositories.codeplex.com/SourceControl/latest#main/Source/Repository.Pattern, with plans to implement an NHibernate provider. You are more than welcome to start implementing these interfaces; based on bandwidth at the moment, I cannot commit to any dates as of yet.
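For readers who have not seen the library, the kind of contracts being abstracted look roughly like this; this is a simplified sketch for orientation only, not the exact Repository.Pattern interfaces:

using System;
using System.Linq;
using System.Linq.Expressions;

// Simplified sketch of generic repository / unit of work contracts.
public interface IRepository<TEntity> where TEntity : class
{
    TEntity Find(params object[] keyValues);
    IQueryable<TEntity> Query(Expression<Func<TEntity, bool>> predicate);
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Delete(TEntity entity);
}

public interface IUnitOfWork : IDisposable
{
    // One repository per entity type, all sharing the same context/transaction.
    IRepository<TEntity> Repository<TEntity>() where TEntity : class;
    int SaveChanges();
}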
I'm relatively new to Entity Framework and really want to get into testing things before I get too much further and have a huge codebase to retrospectively write tests for. I've not used it much, so my methods are fairly basic, like the one below:
public Employee GetEmployee(int employeeID)
{
    using (DatabaseContext db = new DatabaseContext())
    {
        return db.Employees.SingleOrDefault(e => e.idEmployee == employeeID);
    }
}
This is fine in my app, but in my test project it doesn't work, because the test project doesn't seem to read the app.config file, so there's no connection string for DatabaseContext to use. I've read a bit about testing; nothing seems really definitive, though this post is the "official" way to do things (it's linked to from MSDN). The post seems fairly involved though, and would require me to do things a lot differently than I currently am, unless I've misunderstood some of it.
Could someone help clear this up for me? I can't even cheat and copy app.config across to the test project; it still doesn't read it (I've also tried renaming it to MyApp.exe.config and still no luck). Is my GetEmployee method wrong? Should I do something more like the linked post? Or is there some way to test that I've not found yet?
#FizzBuzz - here is another article that discusses how to set up your unit test projects to work with Entity Framework:
How to get Entity Framework to read my app.config file in Unit Test project
You can read about one approach to integration testing here.
Regarding the config issue, setting the Copy to Output Directory property to Copy Always should do the trick.
There are two options to resolve the issue you are facing.
Option 1: Create a mock for the app.config values. For mocking you can use Rhino Mocks.
Option 2: In your unit test project: right-click on the project > Add > Existing Item > select the file > add it as a link.
If you don't want to run your tests against your live database (and rightly so!), then you basically have two options:
1. Use another database (it must be of the same kind as your live DB, since EF doesn't allow changing the DB system, only its location) and add another app.config to your unit test project, which is the same as in your live project except that the DB connection string is different (see the sketch after this list).
2. Use the NDbUnit framework, which allows for defining the test data in XML files. Here also you'll need an individual app.config for your test project, if you don't want to hardcode the test data connection string. (This approach is only advisable if your live DB has no or only very few schema changes, because NDbUnit is quite allergic to these.) I wrote a blog post (with a sample solution) about this approach here.
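To make option 1 concrete, here is a minimal sketch of such a test; it assumes the test project's app.config contains the connection string DatabaseContext expects (pointing at the test database), and that you add a using for the namespace where DatabaseContext lives:

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EmployeeQueryTests
{
    [TestMethod]
    public void UnknownEmployeeId_ReturnsNull()
    {
        // DatabaseContext reads its connection string from the *test* project's
        // app.config at test run time, which points at the test database.
        using (var db = new DatabaseContext())
        {
            Assert.IsNull(db.Employees.SingleOrDefault(e => e.idEmployee == -1));
        }
    }
}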
A third approach would be to mock all the EF stuff, but this quickly gets overly complicated (you can find this also in a previous part of the above-mentioned post, if you're interested).
HTH
Thomas
Thanks for the information, everyone; I found some interesting hints and tips through the various links supplied. In the end, I tried out this article from MSDN, funnily enough! Though it says it's for EF6, it does actually work for previous versions; the reason it indicates EF6 is the async stuff.
I'm using AutoMapper in a number of projects within my solution.
These projects may be deployed independently, across multiple servers.
In the documentation for AutoMapper it says:
If you're using the static Mapper method, configuration only needs to happen once per AppDomain. That means the best place to put the configuration code is in application startup, such as the Global.asax file for ASP.NET applications.
Whilst some of the projects will be ASP.NET, most of these are class libraries / Windows services.
Where should I be configuring my mappings in this case?
The idea that it should only be required once per AppDomain stays the same, as far as I can tell. I always perform my mappings upon initialization of the program itself. While I am not using AutoMapper, I am using an IoC library (Windsor) which requires a mapping of sorts, and this is done from my Program.cs file. So when the application loads it performs the mapping, and because the resolver is static and in a shared library it is available globally.
I don't know if this answers your question or not, but essentially every app has an entry point and if you need your mappings immediately after entry then the entry is the best place to put them.
I've elected to store my mappings in separate classes for each project so that they are reusable.
protected void Application_Start()
{
    RegisterMaps();
}

private void RegisterMaps()
{
    WebAutoMapperSettings.Register();
    BusinessLogicAutoMapperSettings.Register();
}
This way I can easily call BusinessLogicAutoMapperSettings.Register() if I were to reuse only my BusinessLogic DLL in another application or web service.
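One of those settings classes is then nothing more than the following; the types are placeholders, and it uses the old static Mapper API quoted in the question:

using AutoMapper;

// Placeholder types purely for illustration.
public class Employee { public string Name { get; set; } }
public class EmployeeDto { public string Name { get; set; } }

// Each assembly owns a static Register() that sets up its own maps.
public static class BusinessLogicAutoMapperSettings
{
    public static void Register()
    {
        Mapper.CreateMap<Employee, EmployeeDto>();
    }
}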
All great stories always start with those 4 magical words... I have inherited a system... no, wait! That isn't right!
Anyway, with my attempt at humour now past: it's not so much that I have been given something new; rather, I now have to support an existing service.
There are many, many issues when it comes to using this service. As an example, to create a record of a person you need to call 4 different parts of the service.
So, getting together with my manager, we decided that we need to stick another layer on top to add a facade for the common requests, to simplify the number of things to do and the correct order to do them in when creating a new site.
My question starts here, if anyone wants to avoid the above waffle.
So I want to use TDD for the work I am doing, but the service I have inherited (which will become our data layer) is strongly coupled to a database connection string located in a specific connectionStrings node in the Web.config.
The problem I have is that decoupling the service from the config file will take weeks of time that I do not have.
So I have had to add an App.config file with the expected node to my test project.
Is it OK to do this, or should I start investing some time to decouple the database config from the data layer?
I agree that you should probably look into using Dependency Injection as you work your way through the code to decouple your app from the config; however, I also understand that doing that is not going to be an easy task.
So, to answer your question directly: no, there is nothing wrong with adding a config file to support your tests. This is actually quite common when unit testing legacy systems (legacy being an untested system). I have also, when left with no other option, resorted to using reflection to "inject" fake configuration values into the ConfigurationManager in order to test code that reads configuration values, but that should probably be a last resort.
Try using Dependency Injection to mock out your data layer (see the sketch after the links below).
In TDD you are not (necessarily) testing your data layer and database, but the business logic.
Some links:
SO best practices of tdd using c# and rhinomocks
TDD - Rhino Mocks - Part 1 - Introduction
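To sketch the idea (the IPersonService interface and PersonFacade below are hypothetical stand-ins for your inherited service and the facade layer you're adding):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Rhino.Mocks;

// Hypothetical interface extracted in front of the inherited service.
public interface IPersonService
{
    int CreatePerson(string name);
}

// The facade under test depends only on the interface, not on the real service or its config.
public class PersonFacade
{
    private readonly IPersonService _service;
    public PersonFacade(IPersonService service) { _service = service; }

    public int Register(string name)
    {
        return _service.CreatePerson(name);
    }
}

[TestClass]
public class PersonFacadeTests
{
    [TestMethod]
    public void Register_DelegatesToTheService()
    {
        var service = MockRepository.GenerateStub<IPersonService>();
        service.Stub(s => s.CreatePerson("Bob")).Return(42);

        var facade = new PersonFacade(service);

        Assert.AreEqual(42, facade.Register("Bob"));
    }
}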
You can use Dependency Injection to "untie" your code from web.config (or app.config for that matter):
http://weblogs.asp.net/psteele/archive/2009/11/23/use-dependency-injection-to-simplify-application-settings.aspx
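The shape of that approach is roughly the following; the interface and class names are illustrative, not taken from the linked post:

using System.Configuration;

// Illustrative only: the data layer asks this interface for its connection
// string instead of reading the config file directly, so tests can supply a fake.
public interface IConnectionStringProvider
{
    string GetConnectionString(string name);
}

public class ConfigFileConnectionStringProvider : IConnectionStringProvider
{
    public string GetConnectionString(string name)
    {
        return ConfigurationManager.ConnectionStrings[name].ConnectionString;
    }
}

// In a test, no config file is involved at all:
public class FakeConnectionStringProvider : IConnectionStringProvider
{
    public string GetConnectionString(string name)
    {
        return "Server=(localdb)\\TestInstance;Database=Test;Integrated Security=true";
    }
}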
As you mentioned, Dependency Injection is the way to go. You also want to make sure that the consumers of your configuration object are not dependent on your specific configuration implementation, such as ConfigurationManager, ConfigurationSections, etc. To see a full example using custom configuration you can have a look at my blog post on Configuration Ignorance, but basically it comprises the following.
Your configuration implementation, be it using a ConfigurationSection or an XmlReader, should be based on an interface; that way you can easily mock out your properties and easily change your implementation at a later date.
public class BasicConfigurationSection : ConfigurationSection, IBasicConfiguration
{
    ...
}
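Here IBasicConfiguration is just a plain interface exposing the values the rest of the code needs, with no System.Configuration types in it; the members below are illustrative:

// Illustrative members only: the interface simply exposes the values
// the application needs, independent of how they are stored.
public interface IBasicConfiguration
{
    string ServiceUrl { get; }
    int TimeoutSeconds { get; }
}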
To tackle how the configuration is retrieved, we use a configuration provider; the configuration provider for a particular configuration knows how to retrieve its configuration.
public interface IConfigurationProvider<TConfiguration>
{
    TConfiguration GetConfiguration();
}

public class BasicConfigurationProvider : IConfigurationProvider<IBasicConfiguration>
{
    public IBasicConfiguration GetConfiguration()
    {
        return (BasicConfigurationSection)ConfigurationManager.GetSection("Our/Xml/Structure/BasicConfiguration");
    }
}
If you are using Windsor you can then wire this up to Windsor's factory facility.
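For example, with Windsor's fluent registration that wiring can look something like the sketch below (the older FactorySupportFacility achieves the same idea; the exact registration style depends on your Windsor version):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public static class ContainerBootstrapper
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();

        container.Register(
            Component.For<IConfigurationProvider<IBasicConfiguration>>()
                     .ImplementedBy<BasicConfigurationProvider>(),

            // Consumers just depend on IBasicConfiguration; the provider
            // decides how it is actually retrieved.
            Component.For<IBasicConfiguration>()
                     .UsingFactoryMethod(kernel =>
                         kernel.Resolve<IConfigurationProvider<IBasicConfiguration>>()
                               .GetConfiguration()));

        return container;
    }
}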
Hope that helps.