Do you see any benefit in injecting the database connection string from the Global.asax.cs class in ASP.NET MVC, compared to reading the connection string from a BaseDataProvider class that accesses the app.config file?
I'd prefer to inject any objects needed using constructor injection (whenever possible).
One small advantage I see is transparency regarding a class's dependencies.
For example, if you try to instantiate a class in a test harness (while doing integration testing):
- in the first case (constructor injection) you immediately see that it needs a connection string, and you provide one;
- in the second case you instantiate the class (perhaps using a default constructor) and, after some trial and error, discover that it depends on the ConnectionString property being set.
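To make the contrast concrete, here's a minimal sketch (the class names are invented for illustration):

```csharp
using System;

// Constructor injection: the dependency is impossible to miss.
public class CustomerProvider
{
    private readonly string _connectionString;

    public CustomerProvider(string connectionString)
    {
        _connectionString = connectionString
            ?? throw new ArgumentNullException(nameof(connectionString));
    }

    public string ConnectionString => _connectionString;
}

// Property injection: this compiles and constructs without a connection
// string, and only fails later, at the point of use.
public class LegacyCustomerProvider
{
    public string ConnectionString { get; set; }

    public void Load()
    {
        if (string.IsNullOrEmpty(ConnectionString))
            throw new InvalidOperationException("ConnectionString was never set.");
        // ... open the connection and run the query here ...
    }
}
```

In a test harness the first class fails fast at construction time; the second constructs happily and only fails when Load() is eventually called.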
Update:
Another advantage of the constructor injection approach is that it decouples the class itself from the mechanism of getting the connection string from the app.config.
This could enable in the future scenarios that you don't even think about right now.
For example, in a project I currently work on I have a component that has db access and I have reused it in several contexts. In some of them it uses a standard connection string coming from the config file, while in others I have another component that decides which connection string to use based on some conditions.
If you go for the second approach, you'll need to change the code in order to support such a functionality.
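As a sketch of that scenario (the interface and class names here are invented; only the shape matters): the component with db access depends on a provider abstraction, and each context plugs in its own implementation.

```csharp
using System;
using System.Collections.Generic;

public interface IConnectionStringProvider
{
    string GetConnectionString();
}

// The usual case: a fixed string, e.g. read once from the config file at startup.
public class StaticConnectionStringProvider : IConnectionStringProvider
{
    private readonly string _connectionString;

    public StaticConnectionStringProvider(string connectionString)
        => _connectionString = connectionString;

    public string GetConnectionString() => _connectionString;
}

// The other context: pick a connection string based on some condition,
// here the current tenant; the component with db access never changes.
public class TenantConnectionStringProvider : IConnectionStringProvider
{
    private readonly IDictionary<string, string> _byTenant;
    private readonly Func<string> _currentTenant;

    public TenantConnectionStringProvider(
        IDictionary<string, string> byTenant, Func<string> currentTenant)
    {
        _byTenant = byTenant;
        _currentTenant = currentTenant;
    }

    public string GetConnectionString() => _byTenant[_currentTenant()];
}
```

The data-access component never changes; swapping StaticConnectionStringProvider for TenantConnectionStringProvider is purely a composition-root decision.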
I usually take a hybrid approach: my BaseDataProvider class has an empty constructor which defaults to whatever is stored in the config, but is overloaded to accept a connString for cases where I need a connection other than the default.
Then my Global.asax class contains the necessary logic to determine which connection string is needed in a given situation. For example, say your web application is deployed internationally on servers all over the world; you'd want to connect to the nearest available db server to avoid latency issues. So on user login, I would figure out where the user was and then set them up with the appropriate connection.
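A rough sketch of that hybrid class (the default value is a stand-in here; in the real class it would be read from the config file via ConfigurationManager):

```csharp
public class BaseDataProvider
{
    // Stand-in for ConfigurationManager.ConnectionStrings["Default"].ConnectionString.
    private const string DefaultConnectionString = "Server=default;Database=App";

    public string ConnectionString { get; }

    // Empty constructor: defaults to the configured connection string.
    public BaseDataProvider() : this(DefaultConnectionString) { }

    // Overload: accepts a connString for cases where a connection other than
    // the default is needed (e.g. the nearest regional db server at login).
    public BaseDataProvider(string connString)
    {
        ConnectionString = connString;
    }
}
```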
TLDR: What are the reasons for injecting a connection factory vs the IDbConnection itself.
I'm currently using Autofac in .net MVC to inject an instance of IDbConnection into my repository classes to use with Dapper like so:
Autofac setup:
builder.Register<IDbConnection>(ctx => new SqlConnection(conSettings.ConnectionString))
    .InstancePerRequest();
Repo:
private readonly IDbConnection _connection;

public ClientRepository(IDbConnection connection)
{
    _connection = connection;
}
public async Task<IEnumerable<Client>> GetAsync()
{
string query = "SELECT * FROM Clients";
return (await _connection.QueryAsync<Client>(query)).ToList();
}
This has been working perfectly fine for me so far, but I'm a little worried about connections staying open and not being disposed of.
Every post I find on the topic ends in someone suggesting passing in a connection factory and calling it in a using statement, without really mentioning why my current setup is "bad".
As far as I can tell, every request should get its own IDbConnection, with Dapper taking care of opening and closing the connection and Autofac taking care of the disposing.
Is this not the case? Am I missing something?
The way I'm doing this on an ASP.NET Core project (bear with me for a second; I know it's not what you're using, but the concept still applies) is injecting the connection string through the repository constructor.
As you will see, I actually inject the IConfiguration object because I need other settings from the configuration file due to other requirements. Just pretend it's the connection string.
Then my repository looks like this (rough example, written off the top of my head so forgive any mistakes I might have made):
public class FooRepository
{
private readonly IConfiguration _configuration;
public FooRepository(IConfiguration configuration)
{
_configuration = configuration;
}
private IDbConnection Connection => new SqlConnection(_configuration.GetConnectionString("myConnectionString"));
public Foo GetById(int id)
{
using (var connection = Connection)
{
return connection.QueryFirstOrDefault<Foo>("select * from ...", new {id});
}
}
}
ADO.NET connections are pooled; opening one as needed and then closing it is the way it's usually done. With using you make sure the connection gets closed and disposed - that is, returned to the pool - as soon as you're done, even if an exception gets thrown.
Of course you might want to extract this common code to an abstract superclass, so that you won't need to repeat the name of the connection string in every repository, nor re-implement the Connection property.
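One possible shape for such a superclass, sketched with the connection creation behind a delegate so the example has no SqlClient dependency (in the real class the factory would simply be s => new SqlConnection(s)):

```csharp
using System;
using System.Data;

public abstract class RepositoryBase
{
    private readonly string _connectionString;
    private readonly Func<string, IDbConnection> _connectionFactory;

    protected RepositoryBase(string connectionString,
                             Func<string, IDbConnection> connectionFactory)
    {
        _connectionString = connectionString;
        _connectionFactory = connectionFactory;
    }

    // Every derived repository gets this for free: a fresh connection per
    // call, built from the shared connection string.
    protected IDbConnection Connection => _connectionFactory(_connectionString);
}

public class FooRepository : RepositoryBase
{
    public FooRepository(string connectionString,
                         Func<string, IDbConnection> factory)
        : base(connectionString, factory) { }

    // Exposed only so the sketch can be exercised from outside.
    public IDbConnection OpenConnection() => Connection;
}
```

Each concrete repository then only declares its queries; the connection-string name and the Connection property live in one place.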
Also, as I mentioned in my comment, Dapper is not in charge of opening or closing connections; in fact, it fully expects the connection to be open before you can call any of its methods. (Edit: this is no longer true, sorry.)
If you only inject IDbConnection, your repository can use only that one connection, and you are relying on the IoC container to close and dispose of it for you. If you need to connect to two different databases, you can't, since only one connection is created here. If you want to run queries in parallel, you can't, since you can have only one open call to a single database at a time. Finally, if getting your connection string is harder than reading it straight from a config file (it comes from a KeyVault, say), you need to call an outside async method, which the IoC container probably won't let you do.
For me, I always use a factory because I want to close any connection as soon as I'm done with it instead of waiting for IoC to get rid of it. (It feels dirty to allow something outside of the repository to manage database connections.) I want control over which database I'm connecting to (I often have more than 1 DB I have to work with). I occasionally need to run a bunch of different queries in parallel in order to return all the data I need, so I need multiple connections in a single method. I also have to do some logic since we store our connection strings in Azure Key Vault, so I have to do an async call to get that with secret information, which gets a bit complicated, so the Create method on the factory ends up doing a lot of work.
I agree completely that the advice out there seems to be consistently in favor of injecting factories over injecting connections and that it's hard to find much discussion of why.
I think it's fine in straightforward cases to have your container inject the connection itself, which is simple and immediately alleviates 100% of this repetitive boilerplate: using var conn = _connFactory.Make().
But the advantage of having the container inject a factory instead is that it gives you more control over connection creation/lifetime. One common situation where this control really matters is when you want to use TransactionScope. Since it must be instantiated before any connections that you want to participate in the transaction, it doesn't make sense to use a DI container to create connections ahead of time. Daniel Lorenz brought up a couple of other situations where you might want fine control over connection creation in his answer.
If you decided to go the opposite direction (usually not recommended) by using DbConnection.BeginTransaction to manage transactions rather than TransactionScope, then you'd need to share your DbConnection/DbTransaction instances among all the queries/methods/classes involved in the transaction. In that case, injecting connection factories everywhere and letting classes do their own thing would no longer work. A reasonable solution would be to inject either a connection or a connection factory (doesn't really matter which) into a Unit-of-Work type of class and then inject that into your classes with the actual queries in them (Repositories or whatever). If you want to go deep into the weeds on this topic, check out this question.
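A minimal sketch of that Unit-of-Work idea (names invented; error handling trimmed): it owns the connection/transaction pair, and the repositories involved in the transaction all run their commands against it.

```csharp
using System;
using System.Data;

public sealed class UnitOfWork : IDisposable
{
    private readonly IDbConnection _connection;
    private IDbTransaction _transaction;

    public UnitOfWork(Func<IDbConnection> connectionFactory)
    {
        _connection = connectionFactory();
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    // Repositories receive the UnitOfWork and attach their commands to these.
    public IDbConnection Connection => _connection;
    public IDbTransaction Transaction => _transaction;

    public void Commit()
    {
        _transaction.Commit();
        _transaction.Dispose();
        _transaction = null;
    }

    public void Dispose()
    {
        _transaction?.Rollback();   // anything uncommitted is rolled back
        _transaction?.Dispose();
        _connection.Dispose();      // return the connection to the pool
    }
}
```

Repositories then take the UnitOfWork (or just its Connection and Transaction) instead of creating connections themselves.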
In an MVC web application I want to override connection strings based on the development machine I'm using. I can use Web.config transformations, but I also need to override connection strings in various non-web config files. I can use the SlowCheetah extension, but then I will end up creating the same transformation for every project that accesses the database. This is a hassle to maintain when the project becomes bigger and has more developers.
What I would like to do is modify the way Entity Framework or ASP.NET looks for connection strings: add a class of my own that looks up connection strings, and implement the transformation logic only once. I would hopefully use Ninject to inject it only when relevant.
Is there such an "IConnectionStringProvider" interface I can implement and register, and automagically have ASP.NET and EF use it?
EDIT: I have found this, but it seems really nasty. If there's no cleaner way, I'll just use multiple identical configuration transformations, and maybe let the source control system duplicate them properly.
You can tell Entity Framework to use a different connection string - it doesn't have to use the default one in web.config.
Here is an example: http://www.codeproject.com/Tips/234677/Set-the-connection-string-for-Entity-Framework-at
Here is another: http://msdn.microsoft.com/en-us/library/bb738533.aspx
It's up to you how you architect the rest of it.
Personally I use an app setting in web.config to tell my code which connection string to use for a particular part of the system, e.g.
var connectionStringName = ConfigurationManager.AppSettings["connectionStringNameForMyFeature"];
myFeature.ConnectionString = ConfigurationManager.ConnectionStrings[connectionStringName];
So normally I just put my SQL connection string in my ASP.NET web.config and reference it whenever I need to open a database connection. However, this leaves me referencing it all over my project, and it also exposes my SQL username and password in the web.config if the section isn't encrypted.
What are your best practices for keeping the connection methods in a class or class library? I saw a PHP tutorial that did this really well (but I can't find it again) and allowed for re-usability.
I would always keep the connection string in the web.config since the servers/database connections can always change, even if it's not common.
To make it more comfortable to work with in code you can always add something like this:
string m_Connection = ConfigurationManager.AppSettings["MyConnectionString"];
and then just reference m_Connection everywhere.
I would also always encrypt the connection string using an EncryptionProvider.
Great MSDN article : How To: Encrypt Configuration Sections in ASP.NET 2.0 Using DPAPI
I agree with @gillyb. In most cases the web.config is the place for the connection string(s). The other common alternative is a Spring.NET config file if you make heavy use of dependency injection. The end result is the same, except that the site will not rebuild if you change the Spring.config file, whereas it will if you change web.config.
All great stories start with those four magical words: "I have inherited a system"... no wait! That isn't right!
Anyway, with my attempt at humour now passed: it's not so much that I've been given something new; I have to support an existing service.
There are many, many issues when it comes to using this service. As an example, to create a record of a person you need to call 4 different parts of the service.
So, getting together with my manager, we decided that we need to stick another layer on top to add a facade for the common requests, to simplify the number of things to call and the correct order to call them in when creating a new site.
My question starts here if anyone wants to avoid the above waffle
So I want to use TDD on the work I am doing, but the service I have inherited (which will become our data layer) is strongly coupled to a database connection string located in a specific connectionStrings node in the Web.config.
Problem I have is, to decouple the service from the Config file will take weeks of my time which I do not have.
So I have had to add an App.config file with the expected node to my Test project.
Is it ok to do this, or should I start investing some time to decouple the database config from the datalayer?
I agree that you should probably look into using Dependency Injection as you work your way through the code to decouple your app from the config; however, I also understand that doing that is not going to be an easy task.
So, to answer your question directly, no, there is nothing wrong with adding a config file to support your tests. This is actually quite common for unit testing legacy systems (legacy being an un-tested system). I have also, when left with no other option, resorted to utilizing reflection to "inject" fake configuration values into the ConfigurationManager in order to test code that is reading configuration values, but this is probably a last resort.
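To illustrate the reflection trick without depending on ConfigurationManager's internals (which vary by framework version, so the field names in real code will differ), here is the same idea against an invented static ConfigReader stand-in:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Stand-in for a legacy static configuration reader we cannot change.
public static class ConfigReader
{
    private static readonly Dictionary<string, string> _settings =
        new Dictionary<string, string>();

    public static string Get(string key) =>
        _settings.TryGetValue(key, out var v) ? v : null;
}

public static class ConfigFaker
{
    // Reach into the private static dictionary and plant a fake value --
    // the same kind of last-resort trick used against ConfigurationManager.
    public static void Inject(string key, string value)
    {
        var field = typeof(ConfigReader).GetField(
            "_settings", BindingFlags.NonPublic | BindingFlags.Static);
        var settings = (Dictionary<string, string>)field.GetValue(null);
        settings[key] = value;
    }
}
```

It works, but it couples the test to private implementation details, which is why it should stay a last resort.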
Try using Dependency Injection to mock up your DataLayer.
In TDD you are not (necessarily) testing your datalayer and database but the BusinessLogic.
Some links:
SO best practices of tdd using c# and rhinomocks
TDD - Rhino Mocks - Part 1 - Introduction
You can use Dependency Injection to "untie" your code from web.config (or app.config for that matter):
http://weblogs.asp.net/psteele/archive/2009/11/23/use-dependency-injection-to-simplify-application-settings.aspx
As you mentioned, Dependency Injection is the way to go. You also want to make sure that the consumers of your configuration object are not dependent on your specific configuration implementation, such as ConfigurationManager, ConfigurationSections, etc. To see a full example using custom configuration you can have a look at my blog post on Configuration Ignorance, but basically it comprises the following.
Your configuration implementation be it using a ConfigurationSection or an XmlReader should be based on an interface that way you can easily mock out your properties and easily change your implementation at a later date.
public class BasicConfigurationSection : ConfigurationSection, IBasicConfiguration
{
...
}
To tackle how the configuration is retrieved, we use a configuration provider; the configuration provider for a particular configuration knows how to retrieve its configuration:
public interface IConfigurationProvider<TConfiguration>
{
TConfiguration GetConfiguration();
}
public class BasicConfigurationProvider : IConfigurationProvider<IBasicConfiguration>
{
public IBasicConfiguration GetConfiguration()
{
return (BasicConfigurationSection)ConfigurationManager.GetSection("Our/Xml/Structure/BasicConfiguration");
}
}
If you are using Windsor you can then wire this up to Windsor's factory facility.
Hope that helps.
There is a long-running habit here where I work: the connection string lives in the web.config, a SqlConnection object is instantiated in a using block with that connection string and passed to the DataObject's constructor (via a CreateInstance method, as the constructor is private). Something like this:
using(SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString))
{
DataObject foo = DataObject.CreateInstance(conn);
foo.someProperty = "some value";
foo.Insert();
}
This all smells to me... I don't know. Shouldn't the DataLayer class library be responsible for connection objects and connection strings? I'd be grateful to know what others are doing, or any good online articles about these kinds of design decisions.
Consider that the projects we work on always have SQL Server backends, and that is extremely unlikely to change, so the factory and provider patterns are not what I'm after. It's more about where responsibility lies and where config settings should be managed for data-layer operation.
I like to code the classes in my data access layer so that they have one constructor that takes an IDbConnection as a parameter, and another that takes a (connection) string.
That way the calling code can either construct its own SqlConnection and pass it in (handy for integration tests), mock an IDbConnection and pass that in (handy for unit tests) or read a connection string from a configuration file (eg web.config) and pass that in.
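Sketched out, with the SqlConnection creation hidden behind a delegate so the example stays free of a SqlClient dependency (in production that delegate would be s => new SqlConnection(s); the class name is invented):

```csharp
using System;
using System.Data;

public class CustomerData
{
    private readonly IDbConnection _connection;

    // Unit tests pass a mocked IDbConnection; integration tests pass a real
    // SqlConnection they constructed themselves.
    public CustomerData(IDbConnection connection)
    {
        _connection = connection
            ?? throw new ArgumentNullException(nameof(connection));
    }

    // Production code reads the connection string from web.config and uses
    // this overload, supplying the real connection factory.
    public CustomerData(string connectionString,
                        Func<string, IDbConnection> createConnection)
        : this(createConnection(connectionString))
    {
    }
}
```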
Hm, I think I agree that the data layer should be responsible for managing such connection strings, so the higher layers don't need to worry about this. However, I do not think that the SqlConnection should worry about where the connection string comes from.
I think I would have a data layer which provides certain DataInputs, that is, things that take a condition and return DataObjects. Such a DataInput knows: "Hey, these DataObjects are stored in THAT database, and using the configuration I can use some connection string to get a SQL connection from over there."
That way you have encapsulated the entire process of "How and where do the data objects come from?" and the internals of the datalayer can still be tested properly. (And, as a side effect, you can easily use different databases, or even multiple different databases at the same time. Such flexibility that just pops up is a good sign(tm))
This being a "smell" is relative. If you are pretty sure about coupling this particular piece of code to SQL Server and a web.config connection string entry, then it's perfectly OK. If you are not into this kind of coupling, I agree that it is a code smell and is undesirable.