I'm tasked with implementing a Business Object / Data Access Layer for a project that has to support thousands of concurrent users.
I've always used singletons to manage the DAL, but I never gave much thought to how it would behave with so many users at the same time, so I'd like to ask about the proper use of the pattern.
I have:
public class UserDAL
{
    private static UserDAL _userDAL = null;

    //Private constructor
    private UserDAL() { }

    public static UserDAL GetInstance()
    {
        if (_userDAL == null)
        {
            _userDAL = new UserDAL();
        }
        return _userDAL;
    }

    //Example of a method
    public User GetUser(int id)
    {
        IDataReader dataReader = ConnectionFactory.GetConnection().ExecuteSomeQuery("queryHere");
        User user = null;
        //...map dataReader onto user
        return user;
    }
}
As for my ConnectionFactory, I don't think it's a problem, although I did read that it's best to leave connection pooling to ADO.NET itself:
public sealed class ConnectionFactory
{
    private static string _connectionString = ConfigurationManager.ConnectionStrings["ConnectionName"].ConnectionString;

    //My connection interface
    private static IConnection _connection = null;

    public static IConnection GetConnection()
    {
        if (_connection == null)
        {
            //some checks to determine the type
            _connection = new SQLConnection(_connectionString);
        }
        return _connection;
    }
}
I'm also using the singleton pattern in the BO, although I don't think it's necessary:
public class UserBO
{
    private static UserBO _userBO = null;
    private static UserDAL _userDAL = null;

    private UserBO() { }

    public static UserBO GetInstance()
    {
        if (_userBO == null)
        {
            _userBO = new UserBO();
            _userDAL = UserDAL.GetInstance();
        }
        return _userBO;
    }

    //Example of a method
    public User GetUser(int id)
    {
        //Rules
        return _userDAL.GetUser(id);
        //return UserDAL.GetInstance().GetUser(id); //or this
    }
}
I'm doing it like this just so that in the UI/presentation layer I can call:
User someUser = UserBO.GetInstance().GetUser(1);
This has worked for me in the applications I've made so far, but I'm guessing that's because there weren't many simultaneous users.
I'm worried about what would happen in the UserDAL instance when a second user requests something while a first user is already running some heavy operation on it.
Should I drop this pattern in the BO/DAL layer and leave it only in the ConnectionFactory? Are there any issues which I should expect if I use this?
I would definitely drop it altogether, especially for the Connection: the ConnectionFactory could be static, but it should return a new connection each time one is asked for. ADO.NET is very good at managing connection pooling and you just need to get out of its way.
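For illustration, here is a minimal sketch of that idea, assuming a plain SqlConnection rather than the question's IConnection wrapper ("ConnectionName" is taken from the question's config):

// requires: using System.Configuration; using System.Data.SqlClient;
public static class ConnectionFactory
{
    private static readonly string _connectionString =
        ConfigurationManager.ConnectionStrings["ConnectionName"].ConnectionString;

    // A new object on every call; ADO.NET transparently reuses pooled
    // physical connections as long as the connection string is identical.
    public static SqlConnection CreateConnection()
    {
        return new SqlConnection(_connectionString);
    }
}

// Usage - open late, dispose early; Dispose returns the connection to the pool:
// using (var conn = ConnectionFactory.CreateConnection())
// {
//     conn.Open();
//     // execute commands...
// }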
In anything which has changeable state, keep away from singletons. This includes ADO.NET connections and your actual Business Objects. Having one user mutate the state of an object that is being used by another user can lead to all sorts of strange bugs: in a web site you basically have a massively multithreaded application, and changeable singletons are very bad news!
You do need to come up with some sort of locking strategy, though, for when two or more users change copies of the same business object. A valid strategy can even be: 'Actually, this isn't going to be a problem, so I'll ignore it' - but only if you have thought about it. The two basic strategies are Optimistic and Pessimistic Locking.
Optimistic Locking means you optimistically assume that users mostly won't change the same data (for whatever reason), so you don't put database locks on read data. This is the only real possibility on a web site.
Pessimistic Locking says all possibly-changed data will, when read, have DB locks applied until the user is finished with it. This means keeping a transaction open, which isn't practical for a web site.
Optimistic Locking can be implemented by writing UPDATE statements which update a row only where all the columns the current user hasn't changed also haven't been changed in the database; if they have, someone else has changed the same row. Alternatively, you can add a column to every table - version int not null - and update only where the version hasn't changed since you read the object, incrementing the version number in every update.
If either method fails, you need to re-read the now-current data and get your user to confirm or re-apply their changes. A bit of a pain, but it can be necessary.
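As a concrete illustration, here is a rough sketch of the version-column variant; the Users table, its columns, and the "user" object are hypothetical names for the example, not from the question:

// requires: using System.Data; using System.Data.SqlClient;
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    var cmd = new SqlCommand(
        @"UPDATE Users
             SET Name = @name, version = version + 1
           WHERE Id = @id AND version = @versionRead", conn);
    cmd.Parameters.AddWithValue("@name", user.Name);
    cmd.Parameters.AddWithValue("@id", user.Id);
    cmd.Parameters.AddWithValue("@versionRead", user.Version); // version as read earlier

    if (cmd.ExecuteNonQuery() == 0)
    {
        // No row matched id + version: someone else changed the row since we
        // read it. Re-read the current data and ask the user to confirm.
        throw new DBConcurrencyException("Row was modified by another user.");
    }
}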
I would advise you to move away from the Singleton pattern for the sake of testability: see Dependency Injection & Singleton Design pattern.
Instead, take a look at Dependency Injection. Ninject is a good way to start.
DI will take care of wiring the BO and DAL together:
public interface IUserRepository
{
    IEnumerable<User> GetUsers();
}

public class UserBO
{
    private readonly IUserRepository _userRepository;

    public UserBO(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public IEnumerable<User> GetUsers()
    {
        return _userRepository.GetUsers();
    }
}
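To wire this up, a possible Ninject composition root might look like the sketch below; UserRepository is an assumed concrete implementation of IUserRepository, not something from the question:

// requires: using Ninject;
var kernel = new StandardKernel();
kernel.Bind<IUserRepository>().To<UserRepository>();

// Ninject inspects UserBO's constructor and injects the repository for you.
var userBO = kernel.Get<UserBO>();
var users = userBO.GetUsers();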
As for reusing the Connection Pool: Should you reuse SqlConnection, SqlDataAdapter, and SqlCommand objects?
I'm putting together a REST service using ASP.NET Web API & Ninject, though I suspect this might be a more general IoC question than anything specific to my IoC framework. I have a number of objects that need to access a simple cache of User entities:
public class UserCache
{
    private IList<User> users;
    private IUserRepositoryFactory factory;

    [Inject]
    public UserCache(IUserRepositoryFactory factory)
    {
        this.factory = factory;
        this.users = new List<User>();
    }

    public void Add(int id)
    {
        IUserRepository repo = factory.Create(new TestContext());
        this.users.Add(repo.Get(id));
    }

    public int Count { get { return this.users.Count; } }
}
In practice, the cache is read-through, and will fill itself with User entities using a UserRepository (and associated IUserRepository interface):
public class UserRepository : IUserRepository
{
    private readonly TestContext context;

    public UserRepository(TestContext context)
    {
        this.context = context;
    }

    public User Get(int id)
    {
        return new User() { Name = "Test User" };
    }
}
The cache is long-lived and shared across the entire application. My question is this: I want to use my UserRepository to pull User entities from my database. This repository needs to be injected into the cache somehow, or instantiated using a factory.
The trick is, the only way I've been able to both a) create the cache such that Ninject will inject its dependencies and b) have access to the cache throughout the application is to bind the cache in singleton scope and inject it into the objects that need access to it:
kernel.Bind<TestContext>().ToSelf();
kernel.Bind<UserCache>().ToSelf().InSingletonScope();
...and then in a controller (for example):
[Inject]
public UserCache Cache { get; set; }
My question is, is this the best way to treat long-lived objects that require injection? Or is there some better way that I'm missing? I don't want to give the cache (or any other objects like it) direct access to the Ninject kernel.
Isn't this supposed to be the other way around? You should use IUserRepository in your controllers, and the repository under the hood should fetch the data from the cache if it is already cached (nicer if done using an interceptor); otherwise it should hit the database.
That way you don't have to worry about the lifecycle of the long-lived cached objects. Remember that at the end of the day the whole Web API (so far) runs on the web stack, which means the application can be recycled unexpectedly based on different factors.
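A rough sketch of that decorator idea, under the question's IUserRepository interface: consumers depend only on the interface, and this wrapper serves cache hits itself, falling back to the real repository on a miss (eviction and error handling omitted):

// requires: using System.Collections.Concurrent;
public class CachedUserRepository : IUserRepository
{
    private readonly IUserRepository _inner;
    private readonly ConcurrentDictionary<int, User> _cache =
        new ConcurrentDictionary<int, User>();

    public CachedUserRepository(IUserRepository inner)
    {
        _inner = inner;
    }

    public User Get(int id)
    {
        // Read-through: only hit the database on a cache miss.
        return _cache.GetOrAdd(id, key => _inner.Get(key));
    }
}

// Illustrative Ninject wiring: the decorator is the singleton, and the real
// repository is injected into it.
// kernel.Bind<IUserRepository>().To<CachedUserRepository>().InSingletonScope();
// kernel.Bind<IUserRepository>().To<UserRepository>()
//       .WhenInjectedInto<CachedUserRepository>();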
I would like to know if it's good practice to create a static class to get the Entity Framework database context.
This GetEntity() method returns the context; inside it, I build a dynamic connection.
When someone goes to my login page, they need to provide a database number + username + password. I store the dbname in Session["DBName"].
public static class EntityFactory
{
    public static DBEntities GetEntity()
    {
        var scsb = new SqlConnectionStringBuilder();
        scsb.DataSource = ConfigurationManager.AppSettings["DataSource"];
        scsb.InitialCatalog = "db1";
        scsb.MultipleActiveResultSets = true;
        scsb.IntegratedSecurity = true;

        if (HttpContext.Current.Session["DBName"] == null)
        {
            HttpContext.Current.Response.Redirect("/Account/Step1");
        }
        else
        {
            scsb.InitialCatalog = HttpContext.Current.Session["DBName"].ToString();
        }

        var builder = new EntityConnectionStringBuilder();
        builder.Metadata = "res://*/nms.bin.Models.DBModel.csdl|res://*/nms.bin.Models.DBModel.ssdl|res://*/nms.bin.Models.DBModel.msl";
        builder.Provider = "System.Data.SqlClient";
        builder.ProviderConnectionString = scsb.ConnectionString;

        DBEntities db = new DBEntities(builder.ConnectionString);
        return db;
    }
}
When I want to get the DB context, for example in a controller, I just call EntityFactory.GetEntity() and it returns a DB context.
Is the way I'm doing this correct?
Could it be a problem if 20 clients log in at the same time, each with a different dbname?
For the moment I'm not calling Dispose anywhere; is that a problem? Based on my EntityFactory class, can I add some global disposal in that class that will be called automatically? (I'm thinking of the destructor/finalizer.)
The static factory method can be difficult to mock for unit testing. So, for example, if you had this in your controller:
public ActionResult SomeControllerMethod()
{
    var entities = EntityFactory.GetEntity();
    return View(entities.Something); // ... get whatever data...
}
Then how would you use a mocked data context in a unit test? It would be difficult to do.
It would be better to "inject" your context into your controller, typically through the constructor (read the Wikipedia article on the "dependency inversion principle" if you aren't familiar with the concept), like:
public class SomeController
{
    private readonly IDBEntities entities;

    // db context passed in through constructor,
    // to decouple the controller from the backing implementation.
    public SomeController(IDBEntities entities)
    {
        this.entities = entities;
    }
}
And then have the controllers methods use that passed in reference. This way you can use a dependency injection tool to get the appropriate db context, or pass in a mocked context.
I'm not sure whether MVC2 had a good way to plug in a dependency injection framework, but I know MVC3 does.
Your approach works too; there is nothing fundamentally wrong with it, it just seems harder to test. Of course, if you aren't doing any unit testing and don't need to use a mock data store, then I guess it really doesn't matter :)
I typically end up using MVC3 with Entity Framework Code-First, which turns out pretty nice, and you can mock most of the data layer with List<T> instead of the actual database: you can "load" and "save" records to in-memory lists and never touch the real database.
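A minimal sketch of such a fake, assuming an IArticleRepository interface and an Article.Id property that are illustrative names only:

// requires: using System.Collections.Generic; using System.Linq;
public class InMemoryArticleRepository : IArticleRepository
{
    private readonly List<Article> _articles = new List<Article>();

    // "Save" just appends to the in-memory list.
    public void Save(Article article)
    {
        _articles.Add(article);
    }

    // "Load" searches the list instead of querying a database.
    public Article Load(int id)
    {
        return _articles.FirstOrDefault(a => a.Id == id);
    }
}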
In order:
You can improve it by passing GetEntity() all the info it needs (like the dbname, username, and password). As it is now, the static method is tightly coupled with the session; move the session out of the method (see the sketch after this list).
It should not be, as the Session is per user.
If DBEntities inherits from DbContext, you can call Dispose after you've used the object, e.g. dbEntitiesObj.Dispose();
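A sketch of the first point, reusing the question's names: GetEntity takes the database name as a parameter, and the caller decides where that name comes from (Session["DBName"] in a controller, a fixed value in a unit test):

public static class EntityFactory
{
    public static DBEntities GetEntity(string dbName)
    {
        var scsb = new SqlConnectionStringBuilder
        {
            DataSource = ConfigurationManager.AppSettings["DataSource"],
            InitialCatalog = dbName,
            MultipleActiveResultSets = true,
            IntegratedSecurity = true
        };

        var builder = new EntityConnectionStringBuilder
        {
            Metadata = "res://*/nms.bin.Models.DBModel.csdl|res://*/nms.bin.Models.DBModel.ssdl|res://*/nms.bin.Models.DBModel.msl",
            Provider = "System.Data.SqlClient",
            ProviderConnectionString = scsb.ConnectionString
        };

        return new DBEntities(builder.ConnectionString);
    }
}

// In the controller: redirect when the session has no DBName, otherwise
// var db = EntityFactory.GetEntity(Session["DBName"].ToString());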
In a previous question, folks helped me solve a repository lifetime problem; now the question is how to make it work nicely in a composite service.
Let's say I have these services:
public class OrderService : IOrderService
{
    IRepository<Order> orderRepository;

    public OrderService(IRepositoryFactory repositoryFactory)
    {
        orderRepository = repositoryFactory.GetRepository<Order>();
    }

    public void CreateOrder(OrderData orderData)
    {
        ...
        orderRepository.SubmitChanges();
    }
}

public class ReservationService : IReservationService
{
    IRepository<Reservation> reservationRepository;

    public ReservationService(IRepositoryFactory repositoryFactory)
    {
        reservationRepository = repositoryFactory.GetRepository<Reservation>();
    }

    public void MakeReservations(OrderData orderData)
    {
        ...
        reservationRepository.SubmitChanges();
    }
}
And now the interesting part - the composition service:
public class CompositionService : ICompositionService
{
    IOrderService orderService;
    IReservationService reservationService;

    public CompositionService(IOrderService orderService, IReservationService reservationService)
    {
        this.orderService = orderService;
        this.reservationService = reservationService;
    }

    public void CreateOrderAndMakeReservations(OrderData orderData)
    {
        using (var ts = new TransactionScope())
        {
            orderService.CreateOrder(orderData);
            reservationService.MakeReservations(orderData);
            ts.Complete();
        }
    }
}
The problem is that this won't work correctly if the IRepositoryFactory lifestyle is transient (because you would get two different data contexts, and that would require distributed transactions to be enabled, which we try to avoid). Any ideas how to write this correctly?
My observations:
In general, factories should be singletons. If your factory isn't a singleton, then you are probably just hiding another factory behind it.
Factories are meant for creating objects on demand. Your code simply creates a repository in the constructor, so I don't really see the difference between that and simply making the repository a direct injection parameter in the constructor.
These all seem to me like workarounds around a more fundamental problem (described in your first question), and these workarounds only make the problem more complicated. Unless you solve the root problem you will end up with a complex dependency schema and smelly code.
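To make the second observation concrete, here is a hedged sketch: the repository becomes a direct constructor parameter, and the container is configured so that all repositories resolved for one operation share one data context. The scoping calls shown are illustrative Ninject-style pseudocode, not a prescribed setup:

public class OrderService : IOrderService
{
    private readonly IRepository<Order> orderRepository;

    // The repository itself is injected; no factory indirection needed.
    public OrderService(IRepository<Order> orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    public void CreateOrder(OrderData orderData)
    {
        // ...
        orderRepository.SubmitChanges();
    }
}

// Illustrative container wiring: one DataContext per request/operation, so
// OrderService and ReservationService flush the same context and no
// distributed transaction is needed.
// kernel.Bind<DataContext>().ToSelf().InRequestScope();
// kernel.Bind(typeof(IRepository<>)).To(typeof(Repository<>));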
IMO, this is a Distributed Transaction scenario.
In the example you mentioned, the fact that OrderService & ReservationService use the same data context is an implementation detail hidden in the code.
I don't think it is correct to pass this knowledge up to the CompositionService by wrapping the service calls in a TransactionScope, because now the composition service is aware of the shared data context and so needs to use a TransactionScope to run the code correctly.
In my opinion, the composition service code should look like:
try
{
    if (orderService.TryCreateOrder(orderData))
    {
        if (reservationService.TryMakeReservation(orderData))
        {
            reservationService.Commit();
            orderService.Commit();
        }
        else
        {
            orderService.TryRollbackOrder(orderData);
            throw new ReservationCouldNotBeMadeException();
        }
    }
    else
    {
        throw new OrderCouldNotBeCreatedException();
    }
}
catch (CouldNotRollbackOrderServiceException)
{
    // do something here...
}
catch (CouldNotCommitServiceException)
{
    // do something here...
}
In this case, the OrderService.TryCreateOrder method will insert an Order with a PendingReservation status, or some other relevant status which indicates that the Order is inserted but not completed. This state changes when Commit is called on the services (Unit of Work pattern?).
This way, the implementation details of the services are completely hidden from the consumer of the service, while composition is still possible, independent of the underlying implementation details.
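A rough sketch of how the order side of that contract might look; the PendingReservation status and the repository calls are assumptions based on the description above, not a definitive design:

public bool TryCreateOrder(OrderData orderData)
{
    var order = new Order
    {
        Status = OrderStatus.PendingReservation // assumed status enum
        // ...map the remaining fields from orderData
    };
    orderRepository.Insert(order);   // assumed repository method
    orderRepository.SubmitChanges();
    return true;
}

public void Commit()
{
    // Promote the pending order(s) to their final status and persist.
    // ...
    orderRepository.SubmitChanges();
}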
HTH.
I'm working on single-tier, single-user applications with FluentNHibernate, with multiple threads triggered by timers and by incoming socket messages.
What requirements determine whether I can create and dispose the ISession inside each repository method, or whether I need to maintain the ISession lifecycle over multiple calls, perhaps from program start to end?
For example, does lazy loading require the session to be maintained? And if I don't use lazy loading, for what other reason should I maintain the ISession?
Currently my repository methods look like the code below, but I wonder if I'm doing it wrong...
public class ProductRepository
{
    public void Delete(Product product)
    {
        using (ISession session = FNH_Manager.OpenSession())
        {
            using (ITransaction transaction = session.BeginTransaction())
            {
                session.Delete(product);
                transaction.Commit();
            }
        }
    }
}

class FNH_Manager
{
    private static Configuration cfg;
    private static ISessionFactory sessionFactory;

    public static void ConfigureSessionFactory()
    {
        sessionFactory = CreateSessionFactory();
    }

    public static ISession OpenSession()
    {
        return sessionFactory.OpenSession();
    }
}
EDIT1:
Attempt to handle "session per call":
public class EmployeeRepository
{
    public static void Delete(Employee employee)
    {
        using (ISession session = FNH_Manager.OpenSession())
        {
            using (ITransaction transaction = session.BeginTransaction())
            {
                if (employee.Id != 0)
                {
                    var emp = session.Get(typeof(Employee), employee.Id);
                    if (emp != null)
                    {
                        session.Delete(emp);
                        transaction.Commit();
                    }
                }
            }
        }
    }
}
The session must be open when you reference a lazy-loaded field, so if you're relying on lazy-loading outside of your repository you'll need to manage the session lifespan somewhere higher up.
If you don't use lazy-loading, there's also the matter of whether you need to support multiple actions in one transaction. For example, if you delete a product AND some other data in one go, you'd want that to happen in one transaction in the same session (otherwise you might delete the product, have some code throw some exception, and never delete the other data, which may end up with orphan records or a corrupt state in your database).
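A sketch of that multi-action case in the question's repository style: both deletes share one session and one transaction, so either everything is deleted or nothing is. OrderLine is an illustrative related entity, not from the question:

public void DeleteProductWithLines(Product product, IEnumerable<OrderLine> lines)
{
    using (ISession session = FNH_Manager.OpenSession())
    using (ITransaction transaction = session.BeginTransaction())
    {
        foreach (var line in lines)
        {
            session.Delete(line);
        }
        session.Delete(product);
        transaction.Commit(); // one atomic commit for all the deletes
    }
}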
I think you should use the Unit of Work pattern, one unit per thread.
On thread start, create the ISession and initialize the UnitOfWork with it; the repositories use the UnitOfWork with that single ISession; and at the end of the thread's execution, commit the changes, or roll back if there was a conflict with another thread.
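A minimal per-thread unit-of-work sketch along those lines; the UnitOfWork type is hypothetical and builds on the question's FNH_Manager:

public class UnitOfWork : IDisposable
{
    [ThreadStatic]
    private static UnitOfWork _current; // one instance per thread

    private readonly ISession _session;
    private readonly ITransaction _transaction;

    private UnitOfWork()
    {
        _session = FNH_Manager.OpenSession();
        _transaction = _session.BeginTransaction();
    }

    // Repositories reach the thread's shared session through these.
    public static UnitOfWork Current { get { return _current; } }
    public ISession Session { get { return _session; } }

    public static UnitOfWork Start()
    {
        _current = new UnitOfWork();
        return _current;
    }

    public void Commit()   { _transaction.Commit(); }
    public void Rollback() { _transaction.Rollback(); }

    public void Dispose()
    {
        _session.Dispose();
        _current = null;
    }
}

// Thread body:
// using (var uow = UnitOfWork.Start())
// {
//     // repositories use UnitOfWork.Current.Session ...
//     uow.Commit(); // or uow.Rollback() on conflict
// }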
The Product is not associated with any session when it is being deleted; it is a so-called detached object. To use it within a session, for example to delete it, you need to first associate it with the currently opened session. There are several ways to achieve this:
Keep the session open. If the same session that loaded the Product is open when it is deleted, it will work fine.
Reload the object using ISession.Get() or ISession.Load().
Re-attach the object to the newly opened session with ISession.Lock() (options 2 and 3 are sketched below).
Otherwise you'll probably get StaleStateExceptions and the like.
Remember to read up on the NHibernate documentation.
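Sketches of options 2 and 3 in the question's repository style; the Product.Id property is assumed:

public void Delete(Product product)
{
    using (ISession session = FNH_Manager.OpenSession())
    using (ITransaction transaction = session.BeginTransaction())
    {
        // Option 2: reload by id and delete the attached instance...
        var attached = session.Get<Product>(product.Id);
        if (attached != null)
        {
            session.Delete(attached);
        }

        // ...or option 3: re-attach the detached instance instead:
        // session.Lock(product, LockMode.None);
        // session.Delete(product);

        transaction.Commit();
    }
}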
I am using the Entity Framework for the first time and would like to know if I am following best practice.
I have created a separate class in my business logic which will handle the entity context. The problem I have is that in all the videos I have seen, they usually wrap the context in a using statement to make sure it's closed, but obviously I can't do this in my business logic, as the context would be closed before I can actually use it.
So is what I'm doing OK? A couple of examples:
public IEnumerable<Article> GetLatestArticles(bool Authorised)
{
    var ctx = new ArticleNetEntities();
    return ctx.Articles.Where(x => x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
}

public IEnumerable<Article> GetArticlesByMember(int MemberId, bool Authorised)
{
    var ctx = new ArticleNetEntities();
    return ctx.Articles.Where(x => x.MemberID == MemberId && x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
}
I just want to make sure I'm not building something that's going to die when a lot of people use it.
It really depends on how you want to expose your repository/data store.
Not sure what you mean by "the context will be closed, therefore I cannot do business logic". Do your business logic inside the using statement. Or, if your business logic is in a different class, then let's continue. :)
Some people return concrete collections from their Repository, in which case you can wrap the context in the using statement:
public class ArticleRepository
{
    public List<Article> GetArticles()
    {
        List<Article> articles = null;
        using (var db = new ArticleNetEntities())
        {
            articles = db.Articles.Where(something).Take(some).ToList();
        }
        return articles;
    }
}
Advantage of that is satisfying the good practice with connections - open as late as you can, and close as early as you can.
You can encapsulate all your business logic inside the using statement.
The disadvantages: your Repository becomes aware of business logic, which I personally do not like, and you end up with a different method for each particular scenario.
The second option - new up a context as part of the Repository, and make it implement IDisposable.
public class ArticleRepository : IDisposable
{
    ArticleNetEntities db;

    public ArticleRepository()
    {
        db = new ArticleNetEntities();
    }

    public List<Article> GetArticles()
    {
        return db.Articles.Where(something).Take(some).ToList();
    }

    public void Dispose()
    {
        db.Dispose();
    }
}
And then:
using (var repository = new ArticleRepository())
{
var articles = repository.GetArticles();
}
Or the third-option (my favourite), use dependency injection. Decouple all the context-work from your Repository, and let the DI container handle disposal of resources:
public class ArticleRepository
{
    private IObjectContext _ctx;

    public ArticleRepository(IObjectContext ctx)
    {
        _ctx = ctx;
    }

    public IQueryable<Article> Find()
    {
        return _ctx.Articles;
    }
}
Your chosen DI container will inject the concrete ObjectContext into the instantiation of the Repository, with a configured lifetime (Singleton, HttpContext, ThreadLocal, etc), and dispose of it based on that configuration.
I have it set up so each HTTP request gets given a new context. When the request is finished, my DI container will automatically dispose of the context.
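What that per-request lifetime can look like with Ninject (other DI containers have equivalents); IObjectContext and its EF-backed implementation EntityObjectContext are the assumed abstractions from the sketch above:

// requires the Ninject.Web.Common extension for InRequestScope()
kernel.Bind<IObjectContext>()
      .To<EntityObjectContext>()
      .InRequestScope(); // one context per HTTP request, disposed at request end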
I also use the Unit of Work pattern here to allow multiple Repositories to work with one Object Context.
You may have also noticed I prefer to return IQueryable from my Repository (as opposed to a concrete List). Much more powerful (yet risky, if you don't understand the implications). My service layer performs the business logic on the IQueryable and then returns the concrete collection to the UI.
That is by far the most powerful option, as it allows a simple-as-heck Repository; the Unit of Work manages the context, the Service Layer manages the business logic, and the DI container handles the lifetime/disposal of resources/objects.
Let me know if you want more info on that - as there is quite a lot to it, even more than this surprisingly long answer. :)
I would have the ctx as a private variable within each class, create a new instance of it in the constructor, and then dispose of it when finished.
public class ArticleService : IDisposable
{
    private ArticleEntities _ctx;

    public ArticleService()
    {
        _ctx = new ArticleEntities();
    }

    public IEnumerable<Article> GetLatestArticles(bool Authorised)
    {
        return _ctx.Articles.Where(x => x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
    }

    public IEnumerable<Article> GetArticlesByMember(int MemberId, bool Authorised)
    {
        return _ctx.Articles.Where(x => x.MemberID == MemberId && x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
    }

    public void Dispose()
    {
        _ctx.Dispose();
        _ctx = null;
    }
}
Then when calling this:
ArticleService articleService = new ArticleService();
IEnumerable<Article> article = articleService.GetLatestArticles(true);
articleService.Dispose(); // killing the connection
This way you can also add/update other objects within the same context and call a save method which saves any changes to the db through the Entity.
In my experience this code is not good, because you lose the capacity to navigate relationships through navigation properties.
public List<Article> getArticles()
{
    using (var db = new ArticleNetEntities())
    {
        return db.Articles.Where(something).ToList();
    }
}
With this approach you can't use the following code, because a.Members is always null (the db context is closed and can't load the data automatically).
var articles = Data.getArticles();
foreach (var a in articles)
{
    if (a.Members.Any(p => p.Name == "miki"))
    {
        ...
    }
    else
    {
        ...
    }
}
Using only a global db context is also a bad idea, because you would then need a "discard changes" function. Say at one point in your application you do this, but don't save the changes and just close the window:
var article = globalcontext.getArticleByID(10);
article.Approved = true;
Then at another point in the application you perform some other operation and save:
//..... something
globalcontext.saveChanges();
In this case, the previous article's Approved property was already marked as modified by Entity Framework. When you save, Approved is set to true!!!
The best approach for me is to use one context per class.
You can pass the context to an external method if you need to:
class EditArticle
{
    private DbEntities de;
    private Article currentArticle;

    public EditArticle()
    {
        de = new DbEntities(); //initialize on new instance
    }

    public void LoadArticleToEdit(Article a)
    {
        //a is from another context
        currentArticle = de.Article.Single(p => p.IdArticle == a.IdArticle);
    }

    private void SaveChanges()
    {
        ...
        de.SaveChanges();
    }
}
What you can also do is store your context at a higher level.
E.g., you can have a static class storing the current context:
class ContextManager
{
    [ThreadStatic]
    public static ArticleEntities CurrentContext;
}
Then, somewhere outside you do something like this:
using (ContextManager.CurrentContext = new ArticleEntities())
{
IEnumerable<Article> article = articleService.GetLatestArticles(true);
}
Then, inside GetLatestArticles, you just use the same ContextManager.CurrentContext.
Of course, this is just the basic idea. You can make this a lot more workable by using service providers, IoC and such.
You can start preparing Entity Framework in the data access layer by creating a generic repository class for all the required Entity Framework functions. Then you can use it, encapsulated, in the business layer.
Here are the best practices that I have used for Entity Framework in the data, business, and UI layers.
Techniques used for this practice:
Applying SOLID architecture principles
Using the Repository design pattern
Only one class to go (and you will find it ready)
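A hedged sketch of such a generic repository (DbContext-based; the method surface is illustrative, so trim or extend it to what your layers actually need):

// requires: using System; using System.Data.Entity; using System.Linq;
public class Repository<TEntity> : IDisposable where TEntity : class
{
    private readonly DbContext _context;
    private readonly DbSet<TEntity> _set;

    public Repository(DbContext context)
    {
        _context = context;
        _set = context.Set<TEntity>();
    }

    // Expose a queryable so the business layer can compose filters.
    public IQueryable<TEntity> Query()        { return _set; }
    public TEntity Find(params object[] keys) { return _set.Find(keys); }
    public void Add(TEntity entity)           { _set.Add(entity); }
    public void Remove(TEntity entity)        { _set.Remove(entity); }
    public void SaveChanges()                 { _context.SaveChanges(); }
    public void Dispose()                     { _context.Dispose(); }
}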