I am using the Entity Framework for the first time, and would like to know if I am using it according to best practice.
I have created a separate class in my business logic which will handle the entity context. The problem I have is that in all the videos I have seen, they usually wrap the context in a using statement to make sure it's closed, but obviously I can't do this in my business logic, as the context would be closed before I can actually use it.
So is this ok what I'm doing? A couple of examples:
public IEnumerable<Article> GetLatestArticles(bool Authorised)
{
    var ctx = new ArticleNetEntities();
    return ctx.Articles.Where(x => x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
}
public IEnumerable<Article> GetArticlesByMember(int MemberId, bool Authorised)
{
    var ctx = new ArticleNetEntities();
    return ctx.Articles.Where(x => x.MemberID == MemberId && x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
}
I just want to make sure I'm not building something that's going to die when a lot of people use it.
It really depends on how you want to expose your repository/data store.
I'm not sure what you mean by "the context will be closed, therefore I cannot do business logic". Do your business logic inside the using statement. Or if your business logic is in a different class, then let's continue. :)
Some people return concrete collections from their Repository, in which case you can wrap the context in the using statement:
public class ArticleRepository
{
    public List<Article> GetArticles()
    {
        List<Article> articles = null;
        using (var db = new ArticleNetEntities())
        {
            articles = db.Articles.Where(something).Take(some).ToList();
        }
        return articles;
    }
}
The advantage of that is that it satisfies the good practice for connections - open as late as you can, and close as early as you can.
You can encapsulate all your business logic inside the using statement.
The disadvantages - your Repository becomes aware of business logic, which I personally do not like, and you end up with a different method for each particular scenario.
The second option - new up a context as part of the Repository, and make it implement IDisposable.
public class ArticleRepository : IDisposable
{
    ArticleNetEntities db;

    public ArticleRepository()
    {
        db = new ArticleNetEntities();
    }

    public List<Article> GetArticles()
    {
        return db.Articles.Where(something).Take(some).ToList();
    }

    public void Dispose()
    {
        db.Dispose();
    }
}
And then:
using (var repository = new ArticleRepository())
{
var articles = repository.GetArticles();
}
Or the third-option (my favourite), use dependency injection. Decouple all the context-work from your Repository, and let the DI container handle disposal of resources:
public class ArticleRepository
{
private IObjectContext _ctx;
public ArticleRepository(IObjectContext ctx)
{
_ctx = ctx;
}
public IQueryable<Article> Find()
{
return _ctx.Articles;
}
}
Your chosen DI container will inject the concrete ObjectContext into the instantiation of the Repository, with a configured lifetime (Singleton, HttpContext, ThreadLocal, etc), and dispose of it based on that configuration.
I have it setup so each HTTP Request gets given a new Context. When the Request is finished, my DI container will automatically dispose of the context.
I also use the Unit of Work pattern here to allow multiple Repositories to work with one Object Context.
You may have also noticed I prefer to return IQueryable from my Repository (as opposed to a concrete List). Much more powerful (yet risky, if you don't understand the implications). My service layer performs the business logic on the IQueryable and then returns the concrete collection to the UI.
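For illustration, a service-layer method built on the repository's Find() above might look like this (just a sketch; the ArticleService shape and its rule are illustrative, not from the original):

public class ArticleService
{
    private readonly ArticleRepository _repository;

    public ArticleService(ArticleRepository repository)
    {
        _repository = repository;
    }

    public List<Article> GetLatestApprovedArticles(int count)
    {
        // Business logic composes the deferred IQueryable; the database
        // is only hit when ToList() materializes the results for the UI.
        return _repository.Find()
            .Where(x => x.IsApproved)
            .OrderByDescending(x => x.ArticleDate)
            .Take(count)
            .ToList();
    }
}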
That is by far the most powerful option: it allows a simple-as-heck Repository, the Unit of Work manages the context, the Service Layer manages the business logic, and the DI container handles the lifetime/disposal of resources/objects.
Let me know if you want more info on that - as there is quite a lot to it, even more than this surprisingly long answer. :)
I would have the ctx as a private variable within each class, then create a new instance of it each time and dispose of it when finished.
public class ArticleService : IDisposable
{
    private ArticleEntities _ctx;

    public ArticleService()
    {
        _ctx = new ArticleEntities();
    }

    public IEnumerable<Article> GetLatestArticles(bool Authorised)
    {
        return _ctx.Articles.Where(x => x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
    }

    public IEnumerable<Article> GetArticlesByMember(int MemberId, bool Authorised)
    {
        return _ctx.Articles.Where(x => x.MemberID == MemberId && x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
    }

    public void Dispose()
    {
        _ctx.Dispose();
        _ctx = null;
    }
}
Then, when calling it:
ArticleService articleService = new ArticleService();
IEnumerable<Article> article = articleService.GetLatestArticles(true);
articleService.Dispose(); // killing the connection
This way you can also add/update other objects within the same context and call a save method which saves any changes to the db through Entity Framework.
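For example, add/save methods on the same service might look like this (a sketch, not from the original answer; AddObject assumes the EF4-style ObjectContext that a generated ArticleEntities exposes, use Add with a DbContext-based model):

public void AddArticle(Article article)
{
    _ctx.Articles.AddObject(article); // queued in the context, not yet persisted
}

public void Save()
{
    _ctx.SaveChanges(); // persists all pending changes tracked by this context
}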
In my experience this code is not good, because you lose the ability to navigate relationships through navigation properties.
public List<Article> GetArticles()
{
    using (var db = new ArticleNetEntities())
    {
        return db.Articles.Where(something).ToList();
    }
}
Using this approach you can't use the following code, because a.Members is always null (the db context is closed and can't lazy-load the data automatically):
var articles = Data.GetArticles();
foreach (var a in articles)
{
    if (a.Members.Any(p => p.Name == "miki"))
    {
        ...
    }
    else
    {
        ...
    }
}
Using only a global db context is also a bad idea, because you need a way to discard unsaved changes. Suppose at one point in your application you do this, but don't save the changes and just close the window:

var article = globalcontext.getArticleByID(10);
article.Approved = true;

Then at another point in the application you do some other operation and save:

//..... something
globalcontext.SaveChanges();

In this case the earlier article's Approved property was already marked as modified by Entity Framework, so when you save, Approved is set to true!
The best approach for me is to use one context per class.
You can pass an entity from another context to an external method if you need to:
class EditArticle
{
    private DbEntities de;
    private Article currentArticle;

    public EditArticle()
    {
        de = new DbEntities(); // initialize on new instance
    }

    public void LoadArticleToEdit(Article a)
    {
        // a is from another context, so re-query it in this context
        currentArticle = de.Articles.Single(p => p.IdArticle == a.IdArticle);
    }

    private void SaveChanges()
    {
        ...
        de.SaveChanges();
    }
}
What you can also do is store your context at a higher level.
E.g., you can have a static class storing the current context:
class ContextManager
{
[ThreadStatic]
public static ArticleEntities CurrentContext;
}
Then, somewhere outside you do something like this:
using (ContextManager.CurrentContext = new ArticleEntities())
{
IEnumerable<Article> article = articleService.GetLatestArticles(true);
}
Then, inside GetLatestArticles, you just use the same ContextManager.CurrentContext.
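For illustration, GetLatestArticles might then look like this (a sketch, reusing the ArticleEntities model from the question):

public IEnumerable<Article> GetLatestArticles(bool authorised)
{
    // Uses whatever context the caller established in the using block above.
    return ContextManager.CurrentContext.Articles
        .Where(x => x.IsApproved == authorised)
        .OrderBy(x => x.ArticleDate)
        .ToList(); // materialize before the ambient context is disposed
}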
Of course, this is just the basic idea. You can make this a lot more workable by using service providers, IoC and such.
You can start preparing Entity Framework in the data access layer by creating a generic repository class for all required Entity Framework functions. Then you can use it in the business layer (encapsulated).
Here are the best practices that I have used for Entity Framework in data, business, and UI layers
Techniques used for this practice:
Applying SOLID architecture principles
Using Repository design pattern
Only one class to go (and you will find it ready)
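As a minimal sketch of what such a generic repository might look like (assuming an EF6-style DbContext; the names are illustrative, not from the original answer):

public class GenericRepository<TEntity> where TEntity : class
{
    private readonly DbContext _context;
    private readonly DbSet<TEntity> _set;

    public GenericRepository(DbContext context)
    {
        _context = context;
        _set = context.Set<TEntity>();
    }

    public IQueryable<TEntity> Query() { return _set; }
    public TEntity Find(params object[] keys) { return _set.Find(keys); }
    public void Add(TEntity entity) { _set.Add(entity); }
    public void Remove(TEntity entity) { _set.Remove(entity); }
    public void Save() { _context.SaveChanges(); }
}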
I'm working on a classic .Net Framework Web API solution.
I have 3 layers. Let's call them
MVC - with POST, GET, UPDATE, DELETE controllers.
BIZZ - for business, with my service classes. My service classes are kind of repositories, with CREATE, READ, UPDATE, DELETE and specific methods.
DATA - with POCO and definition of DB context.
I will not develop the EF layer. It is a classic Entity Framework project with POCOs. Here is a sample of a Service, with the base Service class:
public abstract class Service : IDisposable
{
    protected DbContext dbContext = new DbContext();

    public void Dispose()
    {
        dbContext.Dispose();
    }
}
Then I have a cart service and an order service. They are similar in their structure, so I will only write the code useful for this example.
public class CartService : Service
{
    public Cart Create(Cart cart)
    {
        // Create the cart
    }
    public Cart Read(Guid id)
    {
        // Read
    }
    public Cart Update(Cart cart)
    {
        // I do some check first then
    }
    public void Delete(Cart cart)
    {
        // Delete
    }
    public void Checkout(Cart cart)
    {
        // Validation of cart removed in this example
        dbContext.Cart.Attach(cart);
        cart.DateCheckout = DateTime.UtcNow;
        dbContext.Entry(cart).State = EntityState.Modified; // I think this line can be removed
        dbContext.SaveChanges();
        using (var orderService = new OrderService())
        {
            foreach (var order in cart.Orders)
            {
                order.DateCheckout = cart.DateCheckout;
                order.Status = OrderStatus.PD; // pending
                orderService.Update(order);
            }
        }
    }
}
public class OrderService : Service
{
    public Order Create(Order order)
    {
        // Create the order
    }
    public Order Read(Guid id)
    {
        // Read
    }
    public Order Update(Order order)
    {
        dbContext.Entry(order).State = EntityState.Modified;
        dbContext.SaveChanges();
        // More processing here...
        return order;
    }
    public void Delete(Order order)
    {
        // Delete
    }
}
So, I have a service, the cart service, that calls another service, the order service. I must work like this because I cannot simply accept the cart and all the orders in it as-is. When I save a new order or update an existing order, I must create records in some other tables in other databases (that code is not in my example). So, I repeat, I have a service that calls another service, and therefore I have 2 dbContexts. At best this just creates 2 contexts in memory; at worst it creates exceptions, like "you cannot attach an entity to 2 contexts" or "this entity is not in the context".
Well, I would like all my services to use the same context. I suppose you will all tell me to use Dependency Injection. Yes, well, OK, but I don't want to have to pass the context each time I create a new service. I don't want to have to do this:
public void Checkout(Cart cart)
{
// ...
using (var orderService = new OrderService(dbContext))
{
// ...
}
}
I would like to do something that impacts my base Service only, if possible. A singleton, maybe... At this point I can see your face: yes, I know singletons are so bad. But I'm doing an IIS Web API, where each request is a new instance, so I don't care about the impact of the singleton. And I can switch databases by changing the connection string in the config file, so the benefit of DI is there already. Well, I also know it is possible to have singletons with DI; I just don't know how.
So, what can I do to be sure I share my dbContext with all my services?
Disclaimer: This example is not intended to be a "good" one and certainly does not follow best practices, but faced with an existing legacy code base which, from your example, already suffers from a number of questionable practices, this should get you past the multiple-context issues.
Essentially, if you're not already using an IoC container to perform dependency injection, then what you need is to introduce a unit of work to manage the scope of a DbContext, where your base Service class provides a DbContext supplied by the unit of work (essentially a DbContext registry).
For the unit of work and assuming EF6 I would recommend Mehdime's DbContextScope which is available as a NuGet package. Alternatively you can find the source code on Github and implement something similar without too much trouble. I like this pattern because it leverages the CallContext to serve as the communication layer between the ContextScope (Unit of Work) created by the DbContextScopeFactory and the AmbientDbContextScope. This will probably take a little time to get your head around but it injects very nicely into legacy applications where you want to leverage the Unit of Work and don't have dependency injection.
What it would look like:
In your Service class you would introduce the AmbientDbContextLocator to resolve your DbContext:
private readonly IAmbientDbContextLocator _contextLocator = new AmbientDbContextLocator();
protected DbContext DbContext
{
get { return _contextLocator.Get<DbContext>(); }
}
And that's it. Later as you refactor to accommodate Dependency injection, just inject the AmbientDbContextLocator instead of 'new'ing it up.
Then, in your web API controllers where you are using your services (not in the services themselves), you need to add the DbContextScopeFactory instance:
private readonly IDbContextScopeFactory _contextScopeFactory = new DbContextScopeFactory();
Lastly, in your API methods, when you want to call your services, you need to simply use the ContextScopeFactory to create a context scope. The AmbientDbContextLocators will retrieve the DbContext from this context scope. The context scope you create with the factory will be done in a using block to ensure your contexts are disposed. So, using your Checkout method as an example, it would look like:
In your Web API [HttpPost] Checkout() method:
using (var contextScope = _contextScopeFactory.Create())
{
    using (var service = new CartService())
    {
        service.Checkout(cart); // cart comes from the request body
    }
    contextScope.SaveChanges();
}
Your cart service Checkout method would remain relatively unchanged; only instead of accessing dbContext as a field (new DbContext()) it will access the DbContext property, which gets the context through the context locator.
The Services can continue to call DbContext.SaveChanges(), but this isn't necessary and the changes will not be committed to the DB until the contextScope.SaveChanges() is called. Each service will have its own instance of the Context Locator rather than the DbContext and these will be dependent on you defining a ContextScope to function. If you call a Service method that tries to access the DbContext without being within a using (var contextScope = _contextScopeFactory.Create()) block you will receive an error. This way all of your service calls, even nested service calls (CartService calls OrderService) will be interacting with the same DbContext instance.
Even if you just want to read data, you can leverage a slightly faster DbContext using _contextScopeFactory.CreateReadOnly() which will help guard against unexpected/disallowed calls to SaveChanges().
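For example (a sketch; id is a placeholder, and the read-only scope exposes no SaveChanges at all):

using (_contextScopeFactory.CreateReadOnly())
{
    using (var service = new CartService())
    {
        var cart = service.Read(id); // queries work as usual; nothing can be committed
    }
}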
When using the ASP.NET Core stack, the tutorial for using EF with it defaults to using DI to provide your DB context, just not with a service layer. That said, it actually does the right thing for this out of the box. I'll give a brief rundown of the bare minimum necessary for this to work, using whatever the latest versions of ASP.NET Core Web API and EF Core were on NuGet at the time of writing.
First, let's get the boilerplate out of the way, starting with the model:
Models.cs
public class ShopContext : DbContext
{
public ShopContext(DbContextOptions options) : base(options) {}
// We add a GUID here so we're able to tell it's the same object later.
public string Id { get; } = Guid.NewGuid().ToString();
public DbSet<Cart> Carts { get; set; }
public DbSet<Order> Orders { get; set; }
}
public class Cart
{
public string Id { get; set; }
public string Name { get; set; }
}
public class Order
{
public string Id { get; set; }
public string Name { get; set; }
}
Then some bare-bones services:
Services.cs
public class CartService
{
ShopContext _ctx;
public CartService(ShopContext ctx)
{
_ctx = ctx;
Console.WriteLine($"Context in CartService: {ctx.Id}");
}
public async Task<List<Cart>> List() => await _ctx.Carts.ToListAsync();
public async Task<Cart> Create(string name)
{
return (await _ctx.Carts.AddAsync(new Cart {Name = name})).Entity;
}
}
public class OrderService
{
ShopContext _ctx;
public OrderService(ShopContext ctx)
{
_ctx = ctx;
Console.WriteLine($"Context in OrderService: {ctx.Id}");
}
public async Task<List<Order>> List() => await _ctx.Orders.ToListAsync();
public async Task<Order> Create(string name)
{
return (await _ctx.Orders.AddAsync(new Order {Name = name})).Entity;
}
}
The only notable things here are: the context comes in as a constructor parameter as God intended, and we log the ID of the context to verify when it gets created with what.
Then our controller:
ShopController.cs
[ApiController]
[Route("[controller]")]
public class ShopController : ControllerBase
{
ShopContext _ctx;
CartService _cart;
OrderService _order;
public ShopController(ShopContext ctx, CartService cart, OrderService order)
{
Console.WriteLine($"Context in ShopController: {ctx.Id}");
_ctx = ctx;
_cart = cart;
_order = order;
}
[HttpGet]
public async Task<IEnumerable<string>> Get()
{
var carts = await _cart.List();
var orders = await _order.List();
return (from c in carts select c.Name).Concat(from o in orders select o.Name);
}
[HttpPost]
public async Task Post(string name)
{
await _cart.Create(name);
await _order.Create(name);
await _ctx.SaveChangesAsync();
}
}
As above, we take the context as a constructor parameter to triple-check it's what it should be; we also need it to call SaveChanges at the end of an operation. (You can refactor this out of controllers if you want to, but they'll work just fine as units of work for now.)
The part that ties this together is the DI configuration:
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
// Use whichever provider you have here, this is where you grab a connection string from the app configuration.
services.AddDbContext<ShopContext>(options =>
options.UseInMemoryDatabase("Initrode"));
services.AddScoped<CartService>();
services.AddScoped<OrderService>();
}
AddDbContext() defaults to registering a DbContext to be created per-request by the container. Web API provides the AddControllers method that puts those into the DI container, and we also register our services manually.
The rest of Startup.cs I've left as-is.
Starting this up and opening https://localhost:5001/shop should log something like:
Context in CartService: b213966e-35f2-4cc9-83d1-98a5614742a3
Context in OrderService: b213966e-35f2-4cc9-83d1-98a5614742a3
Context in ShopController: b213966e-35f2-4cc9-83d1-98a5614742a3
with the same GUID for all three lines in a request, but a different GUID between requests.
A little additional explanation of what goes on above:
Registering a component in a container (using Add() and such above) means telling the container those components exist and that it should create them for you when asked, as well as what identifiers they're available under and how to create them. The defaults for this are more or less "make the component available as its class, and create it by calling its one public constructor, passing other registered components into it" - the container looks at the constructor signature to figure this out.
"Scoped" in an ASP.NET Core app means "per-request." I think in this case one could also use services with a transient lifetime - a new one created every time it's needed, but they'll still get the same DbContext as long as they're created while handling the same request. Which one to do is a design consideration; the main constraint is that you can't inject shorter-lived components into longer-lived components without having to use more complex techniques, which is why I favour having all components as short-lived as possible. In other words, I only make things longer-lived when they actually hold some state that needs to live for that time, while also doing that as sparingly as possible because state bad. (Just recently I had to refactor an unfortunate design where my services were singletons, but I wanted my repositories to be per-request so as to be able to inject the currently logged in user's information into the repository to be able to automatically add the "created by" and "updated by" fields.)
You'll note that with support for doing things this way being built-in to both ASP.NET Core and EF Core, there's actually very little extra code involved. Also, the only thing needed to go from "injecting a context into your controllers" (as the tutorial does) to "injecting a context into services that you use from your controllers" is adding the services into DI - since the controller and context are already under DI, anything new you add can be injected into them and vice versa.
This should give you a quick introduction into how to make things "just work" and shows you the basic use case of a DI container: you declaratively tell it or it infers "this is an X", "this is an Y", "this is a Z and it needs to be created using an X and a Y"; then when you ask the container to give you a Z, it will automagically first create an X and Y, then create Z with them. They also manage the scope and lifetime of these objects, i.e. only create one of a type for an API request. Beyond that it's a question of experience with them and familiarity with a given container - say Ninject and Autofac are much more powerful than the built-in one - but it's variations on the same idea of declaratively describing how to create an object possibly using other objects (its dependencies) and having the container "figure out" how to wire things together.
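As a footnote to the lifetime discussion above, switching the services to a transient lifetime would be a one-line change per service in ConfigureServices; the DbContext registration stays scoped, so both services still see the same context within a request (a sketch):

services.AddTransient<CartService>();
services.AddTransient<OrderService>();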
I'm trying to wrap a transaction around 2 or more database operations which occur in different repository classes. Each repository class uses a DbContext instance, using Dependency Injection. I'm using Entity Framework Core 2.1.
public PizzaService(IPizzaRepo pizzaRepo, IPizzaIngredientsRepo ingredientRepo)
{
    _pizzaRepo = pizzaRepo;
    _ingredientRepo = ingredientRepo;
}

public async Task SavePizza(PizzaViewModel pizza)
{
    // TransactionScopeAsyncFlowOption.Enabled is needed so the scope
    // flows across the awaits below.
    using (var scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted },
        TransactionScopeAsyncFlowOption.Enabled))
    {
        int pizzaRows = await _pizzaRepo.AddEntityAsync(pizza.Pizza);
        int ingredientRows = await _ingredientRepo.PutIngredientsOnPizza(
            pizza.Pizza.PizzaId,
            pizza.Ingredients.Select(x => x.IngredientId).ToArray());
        scope.Complete();
    }
}
Obviously, if one of the operations fails, I want to roll back the entire thing.
Will this transaction scope be enough to roll back, or should the repository classes have transactions of their own?
Even if the above method works, are there better ways to implement transactions?
Repository patterns are great for enabling testing, but don't have each repository new up its own DbContext; share the context across repositories.
As a bare-bones example (assuming you are using DI/IoC)
The DbContext is registered with your IoC container with a lifetime scope of Per Request. So at the onset of the service call:
public PizzaService(PizzaDbContext context, IPizzaRepo pizzaRepo, IPizzaIngredientsRepo ingredientRepo)
{
    _context = context;
    _pizzaRepo = pizzaRepo;
    _ingredientRepo = ingredientRepo;
}

public async Task SavePizza(PizzaViewModel pizza)
{
    int pizzaRows = await _pizzaRepo.AddEntityAsync(pizza.Pizza);
    int ingredientRows = await _ingredientRepo.PutIngredientsOnPizza(
        pizza.Pizza.PizzaId,
        pizza.Ingredients.Select(x => x.IngredientId).ToArray());
    _context.SaveChanges();
}
Then in the repositories:
public class PizzaRepository : IPizzaRepository
{
    private readonly PizzaDbContext _pizzaDbContext = null;

    public PizzaRepository(PizzaDbContext pizzaDbContext)
    {
        _pizzaDbContext = pizzaDbContext;
    }

    public async Task<int> AddEntityAsync( /* params */ )
    {
        _pizzaDbContext.Pizzas.Add( /* pizza */ );
        // ...
    }
}
The trouble I have with this pattern is that it restricts the unit of work to the request, and only the request. You have to be aware of when and where SaveChanges occurs. You don't want repositories, for example, to call SaveChanges, as that could have side effects depending on what else has changed in the context before it is called.
As a result I use a Unit of Work pattern to manage the lifetime scope of the DbContext(s), where repositories no longer get injected with a DbContext; they instead get a locator, and the services get a context scope factory (the unit of work). The implementation I use for EF(6) is Mehdime's DbContextScope (https://github.com/mehdime/DbContextScope). There are forks available for EF Core (https://www.nuget.org/packages/DbContextScope.EfCore/). With the DbContextScope the service call looks more like:
public PizzaService(IDbContextScopeFactory contextScopeFactory, IPizzaRepo pizzaRepo, IPizzaIngredientsRepo ingredientRepo)
{
    _contextScopeFactory = contextScopeFactory;
    _pizzaRepo = pizzaRepo;
    _ingredientRepo = ingredientRepo;
}

public async Task SavePizza(PizzaViewModel pizza)
{
    using (var contextScope = _contextScopeFactory.Create())
    {
        int pizzaRows = await _pizzaRepo.AddEntityAsync(pizza.Pizza);
        int ingredientRows = await _ingredientRepo.PutIngredientsOnPizza(
            pizza.Pizza.PizzaId,
            pizza.Ingredients.Select(x => x.IngredientId).ToArray());
        contextScope.SaveChanges();
    }
}
Then in the repositories:
public class PizzaRepository : IPizzaRepository
{
    private readonly IAmbientDbContextLocator _contextLocator = null;

    private PizzaContext PizzaContext
    {
        get { return _contextLocator.Get<PizzaContext>(); }
    }

    public PizzaRepository(IAmbientDbContextLocator contextLocator)
    {
        _contextLocator = contextLocator;
    }

    public async Task<int> AddEntityAsync( /* params */ )
    {
        PizzaContext.Pizzas.Add( /* pizza */ );
        // ...
    }
}
This gives you a few benefits:
The control of the unit of work scope remains clearly in the service. You can call any number of repositories and the changes will be committed, or rolled back based on the determination of the service. (inspecting results, catching exceptions, etc.)
This model works extremely well with bounded contexts. In larger systems you may split different concerns across multiple DbContexts. The context locator serves as one dependency for a repository and can access any/all DbContexts. (Think logging, auditing, etc.)
There is also a slight performance/safety option for Read-based operations using the CreateReadOnly() scope creation in the factory. This creates a context scope that cannot be saved so it guarantees no write operations get committed to the database.
The IDbContextScopeFactory and IDbContextScope are easily mockable, so your service unit tests can validate whether a transaction is committed or not. (Mock an IDbContextScope to assert SaveChanges, and mock an IDbContextScopeFactory to expect a Create and return the DbContextScope mock.) Between that and the Repository pattern, no messy mocking of DbContexts.
One caution that I see in your example: it appears that your view model is serving as a wrapper for your entity (PizzaViewModel.Pizza). I'd advise against ever passing an entity to the client; rather, let the view model represent just the data that is needed for the view. I outline the reasons for this here.
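A flattened view model for this example might look like this instead (a sketch; the fields are hypothetical):

public class PizzaViewModel
{
    public int PizzaId { get; set; }
    public string Name { get; set; }
    // Just the ingredient ids the view needs, not the Ingredient entities.
    public int[] IngredientIds { get; set; }
}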
I'm tasked with implementing a Business Object / Data Access Layer for a project and have to expect thousands of users concurrently.
I've always used singletons to manage the DAL, but I never gave too much thought to how it would behave with so many multiple users at the same time, so I'd like to ask about the proper use for it.
I have:
public class UserDAL
{
private static UserDAL _userDAL = null;
//Private constructor
private UserDAL() { }
public static UserDAL GetInstance()
{
if(_userDAL == null)
{ _userDAL = new UserDAL(); }
return _userDAL;
}
//Example of a method
    public User GetUsers()
    {
        IDataReader dataReader = ConnectionFactory.GetConnection().ExecuteSomeQuery("queryHere");
        // ...map the reader to a User and return it
    }
}
For my Connection Factory I don't think it's a problem, although I did read that it's best to leave the connection pooling to ADO.NET itself:
public sealed class ConnectionFactory
{
private static string _connectionString = ConfigurationManager.ConnectionStrings["ConnectionName"].ConnectionString;
//My connection interface
private static IConnection _connection = null;
public static IConnection GetConnection()
{
if(_connection == null)
{
//some checks to determine the type
_connection = new SQLConnection(_connectionString);
}
return _connection;
}
}
I'm also using the singleton pattern in the BO, although I don't think it's necessary:
public class UserBO
{
private static UserBO _userBO = null;
private static UserDAL _userDAL = null;
private UserBO() { }
public static UserBO GetInstance()
{
if(_userBO == null)
{
_userBO = new UserBO();
_userDAL = UserDAL.GetInstance();
}
return _userBO;
}
//Example of a method
public User GetUser()
{
//Rules
return _userDAL.GetUsers();
//return UserDAL.GetInstance().GetUsers(); //or this
}
}
I'm doing it like this just so I can call in the UI/Presentation layer:
User someUser = UserBO.GetInstance().GetUser(1);
This has worked for me in the applications I've made so far, but I'm guessing that's because there weren't too many simultaneous users.
I'm worried about what would happen to the UserDAL instance when a second user requests something while a first user is already doing some heavy operation on it.
Should I drop this pattern in the BO/DAL layers and leave it only in the ConnectionFactory? Are there any issues I should expect if I keep using it?
I would definitely drop it altogether, especially for the connection: the ConnectionFactory could be static but return a new connection each time it is asked; ADO.NET is very good at managing connection pooling and you just need to get out of its way.
In anything which has changeable state keep away from singletons. This includes ADO.NET connections, and your actual Business Objects. Having one user mutate the state of an object that is being used by another user can lead to all sorts of strange bugs: in a web site, you basically have a massively multithreaded application and changeable singletons are very bad news!
You do need to come up with some sort of locking strategy, though, for when two or more users change copies of the same business object. A valid strategy includes saying 'Actually, this isn't going to be a problem so I'll ignore it' - but only if you have thought about it. The two basic strategies are Optimistic and Pessimistic Locking.
Optimistic Locking means that you optimistically assume that users mostly won't change the same things (for whatever reason), and so you don't put database locks on read data. This is the only real possibility on a web site.
Pessimistic Locking says all possibly-changed data will, when read, have DB locks applied until the user is finished with it. This means keeping a transaction open, and it's not practical for a web site.
Optimistic Locking can be implemented by creating Update Statements which update a row only where all columns which haven't been changed by the current user also haven't been changed in the database; if they have, someone else has changed the same row. Alternatively, you can add a column to all tables - version int not null - and update where the version hasn't changed since you read the object; you also increment the version number in every update.
If either method fails, you need to reread the now-current data and get your user to confirm or re-apply their changes. Bit of a pain but can be necessary.
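As a minimal sketch of the version-column variant (hypothetical Users table and column names, plain ADO.NET):

// Attempt the update only if the row still has the version we originally read.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"UPDATE Users
      SET Name = @name, Version = Version + 1
      WHERE Id = @id AND Version = @version", conn))
{
    cmd.Parameters.AddWithValue("@name", user.Name);
    cmd.Parameters.AddWithValue("@id", user.Id);
    cmd.Parameters.AddWithValue("@version", user.Version);
    conn.Open();
    if (cmd.ExecuteNonQuery() == 0)
    {
        // Zero rows affected: someone else changed the row since we read it.
        // Re-read the current data and ask the user to confirm or re-apply.
    }
}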
I would advise you to move away from the Singleton pattern for testability: Dependency Injection & Singleton Design pattern
Instead, take a look at Dependency Injection. Ninject is a good way to start.
DI will take care of wiring the BO and DAL together:
public interface IUserRepository
{
IEnumerable<User> GetUsers();
}
public class UserBO
{
private readonly IUserRepository _userRepository;
public UserBO(IUserRepository userRepository){
_userRepository = userRepository;
}
public IEnumerable<User> GetUsers()
{
return _userRepository.GetUsers();
}
}
As for reusing the Connection Pool: Should you reuse SqlConnection, SqlDataAdapter, and SqlCommand objects?
I would like to know if it's a good practice to create a static class to get the Entity database context.
This GetEntity() returns the context. In the GetEntity method, I have a dynamic connection.
When someone goes to my login page, they need to provide a database number + username + password. I store the dbname in Session["DBName"].
public static class EntityFactory
{
    public static DBEntities GetEntity()
    {
        var scsb = new SqlConnectionStringBuilder();
        scsb.DataSource = ConfigurationManager.AppSettings["DataSource"];
        scsb.InitialCatalog = "db1";
        scsb.MultipleActiveResultSets = true;
        scsb.IntegratedSecurity = true;
        if (HttpContext.Current.Session["DBName"] == null)
        {
            HttpContext.Current.Response.Redirect("/Account/Step1");
        }
        else
        {
            scsb.InitialCatalog = HttpContext.Current.Session["DBName"].ToString();
        }
        var builder = new EntityConnectionStringBuilder();
        builder.Metadata = "res://*/nms.bin.Models.DBModel.csdl|res://*/nms.bin.Models.DBModel.ssdl|res://*/nms.bin.Models.DBModel.msl";
        builder.Provider = "System.Data.SqlClient";
        builder.ProviderConnectionString = scsb.ConnectionString;
        DBEntities db = new DBEntities(builder.ConnectionString);
        return db;
    }
}
When I want to get the DBContext, for example in a controller, I just need to call EntityFactory.GetEntity() and that returns me a DB context.
Is the way I do this correct?
Could it be a problem if 20 clients log in at the same time but with different dbnames?
For the moment I'm not using any Dispose; is that a problem? Based on my EntityFactory class, can I make a global disposal in that class that will be called automatically? (I'm thinking of the destructor method.)
The static factory method can be difficult to mock for unit testing. So, for example, if your controller had:
public ActionResult SomeControllerMethod()
{
    var entities = EntityFactory.GetEntity();
    return View(entities.Something); // ... get whatever data...
}
Then how would you use a mocked data context in a unit test? It would be difficult to do.
It would be better to "inject" your context into your controller, typically through the constructor (read the Wikipedia article on the "dependency inversion principle" if you aren't familiar with the concept), like:
public class SomeController
{
private readonly IDBEntities entities;
// db context passed in through constructor,
// to decouple the controller from the backing implementation.
public SomeController(IDBEntities entities)
{
this.entities = entities;
}
}
And then have the controller's methods use that passed-in reference. This way you can use a dependency injection tool to get the appropriate db context, or pass in a mocked context.
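For instance, a unit test could then hand the controller a fake (a sketch using the Moq library; Something and testData are placeholders carried over from the snippets above):

// Arrange: a mocked context instead of a real database.
var mock = new Mock<IDBEntities>();
mock.Setup(m => m.Something).Returns(testData);
var controller = new SomeController(mock.Object);
// Act on the controller and assert; no database is touched.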
I'm not sure if MVC2 had a good way to add a dependency injection framework, but I know MVC3 does.
Your approach works too, there is nothing fundamentally wrong with it, it just seems harder to test. Of course if you aren't doing any unit testing and don't need to use a mock data store, then I guess it really doesn't matter :)
I typically end up using MVC3 with Entity Framework Code-First, which turns out pretty nice, and you can mock most of the data layer with List<T> instead of the actual database: you can "load" and "save" records to in-memory lists and never touch the real database.
In order:
1. You can improve it by passing to GetEntity() all the info it needs (like the dbname, username and password). As it is now, the static method is tightly coupled to the session; move the session access out of the method (see the sketch below).
2. It should not, as the Session is per user.
3. If DBEntities inherits from DbContext, you can call Dispose after you've used the object, e.g. dbEntitiesObj.Dispose();
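For the first point, a sketch of the decoupled factory (reusing the connection-string building from the question; the caller now supplies the dbname, e.g. read from Session by the controller):

public static DBEntities GetEntity(string dbName)
{
    var scsb = new SqlConnectionStringBuilder
    {
        DataSource = ConfigurationManager.AppSettings["DataSource"],
        InitialCatalog = dbName, // no Session access inside the factory any more
        MultipleActiveResultSets = true,
        IntegratedSecurity = true
    };
    var builder = new EntityConnectionStringBuilder
    {
        Metadata = "res://*/nms.bin.Models.DBModel.csdl|res://*/nms.bin.Models.DBModel.ssdl|res://*/nms.bin.Models.DBModel.msl",
        Provider = "System.Data.SqlClient",
        ProviderConnectionString = scsb.ConnectionString
    };
    return new DBEntities(builder.ConnectionString);
}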
In a previous question, folks helped me solve a repository lifetime problem; now there's the question of how to make it work nicely in a composite service.
Let's say I have these services:
public class OrderService : IOrderService
{
IRepository<Order> orderRepository;
public OrderService(IRepositoryFactory repositoryFactory)
{
orderRepository = repositoryFactory.GetRepository<Order>();
}
public void CreateOrder(OrderData orderData)
{
...
orderRepository.SubmitChanges();
}
}
public class ReservationService : IReservationService
{
IRepository<Reservation> reservationRepository;
public ReservationService(IRepositoryFactory repositoryFactory)
{
reservationRepository = repositoryFactory.GetRepository<Reservation>();
}
public void MakeReservations(OrderData orderData)
{
...
reservationRepository.SubmitChanges();
}
}
And now the interesting part - the composition service:
public class CompositionService : ICompositionService {
IOrderService orderService;
IReservationService reservationService;
public CompositionService(IOrderService orderService, IReservationService reservationService)
{
this.orderService = orderService;
this.reservationService = reservationService;
}
public void CreateOrderAndMakeReservations(OrderData orderData)
{
using (var ts = new TransactionScope())
{
orderService.CreateOrder(orderData);
reservationService.MakeReservations(orderData);
ts.Complete();
}
}
}
The problem is that it won't work correctly if the IRepositoryFactory lifestyle is transient (because you would get two different data contexts, and that would require distributed transactions to be enabled, which we try to avoid). Any ideas how to write this correctly?
My observations:
In general, factories should be singletons. If your factory isn't a singleton, then you are probably just hiding another factory behind it.
Factories are meant for creating objects on demand. Your code simply creates a repository in the constructor, so I don't really see the difference between that and simply making the repository a direct injection parameter in the constructor (see the sketch below).
These all seem to me like workarounds around a more fundamental problem (described in your first question), and these workarounds only make the problem more complicated. Unless you solve the root problem you will end up with a complex dependency schema and smelly code.
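For the second observation, the direct-injection alternative would look something like this (a sketch; the container is assumed to know how to construct an IRepository<Order>):

public class OrderService : IOrderService
{
    private readonly IRepository<Order> orderRepository;

    // The container injects the repository directly; no factory indirection.
    public OrderService(IRepository<Order> orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    public void CreateOrder(OrderData orderData)
    {
        ...
        orderRepository.SubmitChanges();
    }
}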
IMO, this is a distributed transaction scenario.
In the example you mentioned, the fact that OrderService & ReservationService use the same data context is an implementation detail hidden in the code.
I don't think it is correct to pass this knowledge up to the CompositionService by wrapping the service calls in a TransactionScope, as now the composition service is aware of the shared data context and so needs to use a TransactionScope to run the code correctly.
In my opinion, the composition service code should look like:
try
{
    if (orderService.TryCreateOrder(orderData))
    {
        if (reservationService.TryMakeReservation(orderData))
        {
            reservationService.Commit();
            orderService.Commit();
        }
        else
        {
            orderService.TryRollbackOrder(orderData);
            throw new ReservationCouldNotBeMadeException();
        }
    }
    else
    {
        throw new OrderCouldNotBeCreatedException();
    }
}
catch (CouldNotRollbackOrderServiceException)
{
    // do something here...
}
catch (CouldNotCommitServiceException)
{
    // do something here...
}
In this case, the OrderService.TryCreateOrder method will insert an Order with a PendingReservation status, or some other relevant status which indicates that the Order is inserted but not completed. This state changes when Commit is called on the services (Unit of Work pattern?).
In this case, the implementation details of the services are completely hidden from the consumer of the service, while composition is still possible, independent of the underlying implementation details.
HTH.