I'm using a very simple ASP.NET MVC application with Entity Framework 6.0.2 and .NET 4.5.1:
public class HomeController : Controller
{
    public ActionResult Index()
    {
        int count;
        using (var db = new LocalContext())
        {
            count = db.Counters.Count();
        }
        return View(count);
    }
}

public class Counter
{
    public int Id { get; set; }
}

public class LocalContext : DbContext
{
    public DbSet<Counter> Counters { get; set; }
}
If I run a load test against it, I eventually get an OutOfMemoryException. (tinyget -srv:localhost -port:<port> -uri:/home/index/ -threads:30 -loop:5000). In Performance Monitor I see the Gen 2 heap grow steadily. If I use a smaller loop value (say 500), the size grows until tinyget stops. Then the heap size stays the same (for at least 20 minutes, after which I stopped the server).
What am I doing wrong?
EDIT
So I tried Simon Mourier's suggestion and left out the EF code. Then I don't have memory problems. So I thought, maybe if I use Release instead of Debug, it will make a difference. And it did! Memory was released after a while and I could put high load on the site. Then I switched back to Debug to see if I could get more info and... even in Debug mode, no problems anymore. FML, I worked a day on it and now I can't reproduce it anymore.
In your case, the class that inherits from DbContext would need to extend the dispose pattern. Inside LocalContext, add the following:
public void Dispose()
{
    this.Dispose(true);
    GC.SuppressFinalize(this);
}

protected virtual void Dispose(bool disposing)
{
    if (disposing)
    {
        // Manage any native resources.
    }
    // Handle any other cleanup.
}
Without specifically overriding Dispose, the using statement is only going to call Dispose() on the base class, whereas you need the derived class's cleanup to run as well.
I don't see anything wrong with your code. Maybe this is an issue with the underlying ADO.NET provider. Which database are you using?
I remember having issues with some unit tests that did not release SQLite database files, which I eventually solved with this code (in my DbContext class):
public class LocalContext : DbContext
{
    protected override void Dispose(bool disposing)
    {
        var connection = this.Database.Connection;
        base.Dispose(disposing);
        connection.Dispose();
    }
}
It may be unrelated, but I would give it a try.
This might not be the correct answer, but I suggest keeping your context managed by an IoC container, registered with a transient or per-HTTP-request scope. (I haven't included an example because IoC container syntax varies widely; if you want one for a specific DI container, please reply.)
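As a sketch of the per-request approach with one specific container (Autofac's ASP.NET MVC integration; the registration shown is illustrative, and other containers have equivalent scopes):

```csharp
// Composition root (e.g. in Global.asax Application_Start).
// InstancePerRequest gives each HTTP request its own LocalContext,
// which Autofac disposes automatically when the request ends.
var builder = new ContainerBuilder();
builder.RegisterControllers(typeof(MvcApplication).Assembly);
builder.RegisterType<LocalContext>().InstancePerRequest();
var container = builder.Build();
DependencyResolver.SetResolver(new AutofacDependencyResolver(container));

// The controller then takes the context as a constructor dependency
// instead of creating and disposing it itself:
public class HomeController : Controller
{
    private readonly LocalContext _db;

    public HomeController(LocalContext db)
    {
        _db = db;
    }

    public ActionResult Index()
    {
        return View(_db.Counters.Count());
    }
}
```

With this setup the controller never calls Dispose; the container owns the context's lifetime.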
I would go for creating a connection class for the DB:
public class DBconnection : IDisposable
{
    private ChatEntities _db = new ChatEntities();

    protected ChatEntities Db
    {
        get { return _db; }
    }

    public void Dispose()
    {
        if (_db != null)
        {
            _db.Dispose();
        }
    }
}
Then, when you want to connect and manipulate data, derive from it. Let's call it the DBlogic class:
public class DBlogic : DBconnection
{
    internal void WriteToDB(String str)
    {
        // Do something ...
        Db.SaveChanges();
    }
}
This way Dispose will release the resources, and it's cleaner... at least to my eyes :D
Actually, an OutOfMemoryException is not surprising in this situation, since garbage collection does not necessarily occur immediately after you've finished with the object. In this scenario you can call GC.Collect() to perform a collection on all generations of memory and reclaim all inaccessible memory immediately:
public class HomeController : Controller
{
    public ActionResult Index()
    {
        int count;
        using (var db = new LocalContext())
        {
            count = db.Counters.Count();
        }
        GC.Collect();
        return View(count);
    }
}
Note that you should not use GC.Collect() in production code since it interferes with the Garbage Collection mechanism.
Just wondering if I'm disposing my DbContext object correctly here, or should I be using a using block instead?
public class RepoBankAccount : IBankAccount
{
    private AppDbContext db = null;

    public RepoBankAccount()
    {
        this.db = new AppDbContext();
    }

    public RepoBankAccount(AppDbContext db)
    {
        this.db = db;
    }

    public IEnumerable<BankAccount> ViewAllBankAccount()
    {
        return db.BankAccounts.ToList();
    }

    public BankAccount ViewBankAccount(long accountNumber)
    {
        return db.BankAccounts.Where(b => b.AccountNumber.Equals(accountNumber)).SingleOrDefault();
    }

    public void DeleteBankAccount(BankAccount bankAccount)
    {
        db.BankAccounts.Remove(bankAccount);
        Save();
    }

    public void InsertBankAccount(BankAccount bankAccount)
    {
        db.BankAccounts.Add(bankAccount);
        Save();
    }

    public void Save()
    {
        try
        {
            db.SaveChanges();
        }
        catch (Exception ex)
        {
            System.Console.WriteLine("Error:" + ex.Message);
        }
        finally
        {
            if (db != null)
                db.Dispose();
        }
    }
}
I read that I should not be calling Dispose manually, from
https://softwareengineering.stackexchange.com/questions/359667/is-it-ok-to-create-an-entity-framework-datacontext-object-and-dispose-it-in-a-us
But in some sample code I also notice this scaffolding code, and it's not clear to me how it does the job on its own:
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        db.Dispose();
    }
    base.Dispose(disposing);
}
DbContexts are designed to be short-lived. The very first initialization and use of a DbContext carries a spin-up cost to resolve the entity mappings, but aside from that the context can be scoped to individual calls, or sets of calls. Your code will work fine, and as long as your repo is disposed, the DbContext will be cleaned up. There are pitfalls with this approach, though: as the product matures it is easy to forget to dispose something, and these DbContexts can soak up a fair bit of memory if they are long-lived.
To avoid issues with entities that become disconnected from their DbContext, an entity should never leave the scope of its DbContext. If it does, you run into errors if a lazy load gets triggered, for example.
For instance, let's say I have a method in a Controller or such that does something like this:
(Note: I don't advocate ever returning Entities to a view, but for example's sake...)
public ActionResult View(long accountNumber)
{
    BankAccount bankAccount;
    using (var repo = new RepoBankAccount())
    {
        bankAccount = repo.ViewBankAccount(accountNumber);
    }
    return View(bankAccount);
}
The repo will be disposed, and if bank account either has no references, or all references are eager loaded, this call would work just fine. However, if there is a lazy load call, the controller method will fail because the DbContext associated with the Bank Account was disposed.
This can be compensated for by ensuring the return occurs inside the scope of the using block:
public ActionResult View(long accountNumber)
{
    using (var repo = new RepoBankAccount())
    {
        BankAccount bankAccount = repo.ViewBankAccount(accountNumber);
        return View(bankAccount);
    }
}
To help avoid issues like this, it is generally a better idea to create POCO view model classes to populate within the scope of the DbContext from the entities, then return those view models. No surprise lazy load hits etc.
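A minimal sketch of that idea, using the question's repository (the BankAccountViewModel class and its Balance property are illustrative, not from the original code):

```csharp
// A plain view model: no EF proxies, no lazy loading possible.
public class BankAccountViewModel
{
    public long AccountNumber { get; set; }
    public decimal Balance { get; set; }
}

public ActionResult View(long accountNumber)
{
    using (var repo = new RepoBankAccount())
    {
        BankAccount account = repo.ViewBankAccount(accountNumber);
        if (account == null)
            return HttpNotFound();

        // Copy everything the view needs while the context is still alive,
        // so nothing returned can trigger a lazy load later.
        var viewModel = new BankAccountViewModel
        {
            AccountNumber = account.AccountNumber,
            Balance = account.Balance
        };
        return View(viewModel);
    }
}
```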
Where this really starts to crumble apart is when you want to coordinate things like updates across entities to ensure that updates are committed or rolled back together. Each of your repo classes is going to have a separate DbContext instance.
The first default approach to get familiar with to address this is Dependency Injection and Inversion of Control, particularly an IoC container such as Autofac, Unity, Ninject, or Castle Windsor. Using these, you can have your repository classes accept a dependency on a DbContext, and they can scope a single instance of a dependency across a lifetime (such as per HTTP request, for example). In this way, all of the repositories used in a single session call will be provided the same DbContext instance, and a call to SaveChanges() will attempt to commit all pending changes together.
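With constructor injection, the repository stops newing up its own context. A sketch of the shape (registration shown for Autofac; the other containers have equivalent scopes, and the names are illustrative):

```csharp
public class RepoBankAccount : IBankAccount
{
    private readonly AppDbContext _db;

    // The container supplies the request-scoped context; the repository
    // no longer owns or disposes it.
    public RepoBankAccount(AppDbContext db)
    {
        _db = db;
    }

    public BankAccount ViewBankAccount(long accountNumber)
    {
        return _db.BankAccounts.SingleOrDefault(b => b.AccountNumber == accountNumber);
    }
}

// Registration: one AppDbContext per HTTP request, shared by every
// repository resolved during that request.
var builder = new ContainerBuilder();
builder.RegisterType<AppDbContext>().InstancePerRequest();
builder.RegisterType<RepoBankAccount>().As<IBankAccount>().InstancePerRequest();
```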
A better pattern is the Unit of Work pattern, where the scope of the DbContext is moved outside of the repository, and each repository is either provided a reference to the DbContext or can locate it (similar to how the IoC pattern works). The advantage of UoW patterns is that you can move control of the commit/rollback out to the consumer of the repositories. I promote the use of Mehdime's DbContextScope, since it negates the need to pass around references to the UoW/DbContext.
Mehdime DbContextScope
(EF6 original github)
EFCore supported Port
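A sketch of what consuming DbContextScope looks like (assuming the EF6 package; AppDbContext and the Balance property are illustrative names, not from the original code):

```csharp
public class BankAccountService
{
    private readonly IDbContextScopeFactory _dbContextScopeFactory;

    public BankAccountService(IDbContextScopeFactory dbContextScopeFactory)
    {
        _dbContextScopeFactory = dbContextScopeFactory;
    }

    public void Transfer(long fromAccount, long toAccount, decimal amount)
    {
        using (var scope = _dbContextScopeFactory.Create())
        {
            // Code running inside this block (including repositories that
            // locate the ambient scope) shares one AppDbContext, so both
            // updates belong to a single unit of work.
            var db = scope.DbContexts.Get<AppDbContext>();
            var from = db.BankAccounts.Single(b => b.AccountNumber == fromAccount);
            var to = db.BankAccounts.Single(b => b.AccountNumber == toAccount);
            from.Balance -= amount;
            to.Balance += amount;

            // One SaveChanges call commits or fails as a unit.
            scope.SaveChanges();
        }
    }
}
```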
I'm running into what appears to be a fairly typical problem in EF. As I continue accessing my context, the number of items it tracks and enumerates over in detect changes increases. Eventually, everything slows to a crawl. Here's what I'm currently doing to address the problem:
public class ContextGenerator
{
    private IContext _context;
    private string _connString;
    private int accessCount;

    public ContextGenerator(string conn)
    {
        _connString = conn;
    }

    public IContext Instance
    {
        get
        {
            if (accessCount > 100)
            {
                Dispose();
            }
            if (_context == null)
            {
                var conn = EntityConfigurationContext.EntityConnection(_connString);
                _context = new MyDbContext(conn);
                accessCount = 0;
            }
            ++accessCount;
            return _context;
        }
    }

    public void Dispose()
    {
        _context.Dispose();
        _context = null;
    }
}
This mostly works to prevent my context from getting too unwieldy, as it disposes and creates a new one every 100 accesses, but it seems very clunky and messy. Furthermore, 100 was chosen arbitrarily and there's no guarantee that somebody won't insert a million things with only one access. Is there a way to instead ask the context itself if it's gotten "too big"?
Or if anyone has a better idea for tackling this problem, I'm open to suggestions.
Each context should be a single unit of work, so I would highly recommend using one context per operation (unless you really have to do otherwise).
For more info on what EF is currently tracking, check out Context.ChangeTracker.
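For instance, the arbitrary access counter could be replaced with a check on how many entities the context is actually tracking. A sketch against the question's ContextGenerator (this assumes IContext exposes the underlying DbContext's ChangeTracker; the threshold of 1000 is illustrative):

```csharp
public IContext Instance
{
    get
    {
        // Recycle the context once it tracks "too many" entities,
        // rather than after a fixed number of property accesses.
        if (_context != null &&
            ((DbContext)_context).ChangeTracker.Entries().Count() > 1000)
        {
            Dispose();
        }
        if (_context == null)
        {
            var conn = EntityConfigurationContext.EntityConnection(_connString);
            _context = new MyDbContext(conn);
        }
        return _context;
    }
}
```

This ties the recycle decision to the actual cause of the slowdown (tracked entries enumerated by DetectChanges) instead of a proxy for it.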
I'm having an issue in Entity Framework 6 where an exception is consistently thrown. For the most part the application works perfectly fine until I try adding a user to a role via a linking table.
The error being thrown is the following:
The relationship between the two objects cannot be defined because they are attached to different ObjectContext objects.
The functionality will happily add the user to the role in memory, but as soon as SaveChanges() is called, the process falls over.
I'm aware of the how and why of the above error, and after doing some research it's due to the context not being disposed of correctly. So, following on from that and looking into the DbContext setup, I've realised IDisposable wasn't added to the configuration. Unfortunately, no matter what I've tried, incorporating IDisposable at any point within the application still doesn't dispose of the contexts correctly.
So after spending a fair bit of time and having no luck via Google I'm wondering if any of you have a solution or are able to point me in the right direction.
The following is a cut-down version of the Data Layer classes I've implemented:
public class GenericRepository<T> : WebsiteContext, IGenericRepository<T> where T : class
{
    public virtual void Commit()
    {
        SaveChanges();
    }

    public virtual void Delete(int id)
    {
        var record = Set<T>().Find(id);
        if (record == null)
            throw new Exception("Some Message");
        Set<T>().Remove(record);
    }

    // ... ETC
}

public interface IGenericRepository<T> where T : class
{
    void Commit();
    // ... ETC
}

public class WebsiteContext : DbContext, IWebsiteContext
{
    static WebsiteContext()
    {
        Database.SetInitializer<WebsiteContext>(null);
    }

    public WebsiteContext() : base("Name=WebsiteContext") { }

    public IDbSet<User> Users { get; set; }
    // ... ETC

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // ... ETC
    }
}
This implementation is loosely based on the following Stack Overflow question:
Entity Framework 6 Code First - Is Repository Implementation a Good One?
The following is a condensed version of the Service Layer class and method which is causing the issue.
private IGenericRepository<User> _userRepository;
private IGenericRepository<ApplicationUserSetting> _userSettingRepository;
private IGenericRepository<ApplicationRole> _roleRepository;

public UserManagementService()
{
    _userRepository = new GenericRepository<User>();
    _roleRepository = new GenericRepository<ApplicationRole>();
    _userSettingRepository = new GenericRepository<ApplicationUserSetting>();
}

public void AssignUserRole(AssignRoleModel model)
{
    var user = _userRepository.GetById(model.UserId);
    if (user == null)
        return;

    var role = _roleRepository.GetById(model.RoleId);
    if (role == null)
        return;

    user.Roles.Add(role);
    _userRepository.Commit();
}
The issue, just like the error states, is because you have multiple instances of the type DbContext fetching your entities for you. Each fetched entity is then associated with the DbContext instance that retrieved it. If you want to persist changes to these entities it has to occur on the DbContext instance that it is associated with OR you have to attach it to the DbContext instance it is not associated with.
If you are trying to keep it simple, I recommend you implement a DI framework like Autofac. You can then have a single DbContext instance created per request and have it injected everywhere you need it. It will allow you to keep your existing structure (I am not going to comment on that, as I consider it out of scope for this question); the end result would be that each injected GenericRepository instance has an injected WebsiteContext instance, but the WebsiteContext instances are shared (all the same instance). The upside is no more error; the downside is that you have to be aware that any changes to any entities will be persisted as soon as you execute the Save functionality.
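A sketch of that shape: the repository takes the shared context as a constructor dependency instead of inheriting from WebsiteContext (registration shown for Autofac; the GetById helper is illustrative):

```csharp
public class GenericRepository<T> : IGenericRepository<T> where T : class
{
    private readonly WebsiteContext _context;

    // The container injects the same per-request WebsiteContext into
    // every repository, so all entities share one context.
    public GenericRepository(WebsiteContext context)
    {
        _context = context;
    }

    public T GetById(int id)
    {
        return _context.Set<T>().Find(id);
    }

    public void Commit()
    {
        _context.SaveChanges();
    }
}

// Registration:
var builder = new ContainerBuilder();
builder.RegisterType<WebsiteContext>().InstancePerRequest();
builder.RegisterGeneric(typeof(GenericRepository<>))
       .As(typeof(IGenericRepository<>))
       .InstancePerRequest();
```

With this in place, AssignUserRole fetches the user and the role from the same context, so attaching the role to the user no longer spans two ObjectContexts.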
Using multiple repositories causes the issue. Just use one repository (= one DbContext) and have different methods for getting the individual types.
E.g. _repository.Get(id)
It's way out of scope to point out how your current implementation could be made to work, but if you did want to use more than one context, you can despite what others have said.
If you do, you will have to detach the entity from the previous context first.
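A sketch of what that looks like with two contexts (here contextA loaded the role and contextB owns the user; the Roles DbSet name is illustrative):

```csharp
// Detach the role from the context that loaded it...
contextA.Entry(role).State = EntityState.Detached;

// ...then attach it to the context that will save the relationship.
contextB.Set<ApplicationRole>().Attach(role);
user.Roles.Add(role);
contextB.SaveChanges();
```

Attach tells the second context to track the entity as Unchanged rather than trying to re-insert it.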
I'm using Entity Framework 6 and Autofac in my web application.
I inject a unit of work with a DbContext inside, both externally owned, so I can dispose of them myself.
The DbContext is registered per lifetime scope.
The unit of work comes from a factory and is therefore registered per dependency.
When executing the first HTTP GET action, everything works fine, and I see the unit of work and the context are disposed after the response comes back from the db, which is great.
My issue is that whenever I execute a second request, the context for some reason is disposed before I return an IQueryable, so I get an exception saying:
The operation could not be executed because the DbContext is disposed.
For example, calling the GetFolders method works the first time and fails afterwards.
I see the context is disposed too early; what I don't understand is what triggers it so soon in the second request.
public interface IUnitOfWork : IDisposable
{
    IRepository<Folder> FoldersRepository { get; }
    IRepository<Letter> LettersRepository { get; }
    bool Commit();
}

public class EFUnitOfWork : IUnitOfWork
{
    public IRepository<Folder> FoldersRepository { get; set; }
    public IRepository<Letter> LettersRepository { get; set; }

    private readonly DbContext _context;

    public EFUnitOfWork(DbContext context, IRepository<Folder> foldersRepo, IRepository<Letter> lettersRepo)
    {
        _context = context;
        FoldersRepository = foldersRepo;
        LettersRepository = lettersRepo;
    }

    private bool disposed = false;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                _context.Dispose();
            }
            disposed = true;
        }
    }

    public bool Commit()
    {
        try
        {
            return _context.SaveChanges() > 0;
        }
        catch (DbEntityValidationException exc)
        {
            // Just to ease debugging.
            foreach (var error in exc.EntityValidationErrors)
            {
                foreach (var errorMsg in error.ValidationErrors)
                {
                    logger.Log(LogLevel.Error, "Error trying to save EF changes - " + errorMsg.ErrorMessage);
                }
            }
            return false;
        }
    }
}
public class Repository<T> : IRepository<T> where T : class
{
    protected readonly DbContext Context;
    protected readonly DbSet<T> DbSet;

    public Repository(DbContext context)
    {
        Context = context;
        DbSet = context.Set<T>();
    }

    public IQueryable<T> Get()
    {
        return DbSet;
    }

    public void Add(T item)
    {
        DbSet.Add(item);
    }

    public virtual void Remove(T item)
    {
        DbSet.Remove(item);
    }

    public void Update(T item)
    {
        Context.Entry(item).State = EntityState.Modified;
    }

    public T FindById(int id)
    {
        return DbSet.Find(id);
    }
}
public class DataService : IDataService
{
    private Func<IUnitOfWork> _unitOfWorkFactory;

    public DataService(Func<IUnitOfWork> unitOfWorkFactory)
    {
        _unitOfWorkFactory = unitOfWorkFactory;
    }

    public List<FolderPreview> GetFolders()
    {
        using (var unitOfWork = _unitOfWorkFactory())
        {
            var foldersRepository = unitOfWork.FoldersRepository;
            var foldersData = foldersRepository.Get().Select(p => new FolderPreview
            {
                Id = p.Id,
                Name = p.Name
            }).ToList();
            return foldersData;
        }
    }
}

public class FolderPreview
{
    public int Id { get; set; }
    public string Name { get; set; }
}
Startup code:
{
    var builder = new ContainerBuilder();
    builder.RegisterGeneric(typeof(Repository<>)).As(typeof(IRepository<>)).InstancePerLifetimeScope();
    builder.RegisterType<DataService>().As<IDataService>().SingleInstance();
    builder.RegisterType<EFUnitOfWork>().As<IUnitOfWork>().InstancePerDependency().ExternallyOwned();
    builder.RegisterType<MyDbContext>().As<DbContext>().InstancePerLifetimeScope().ExternallyOwned();
}
Is this related to singletons somehow? Almost all of my application is singletons; the DataService is also a singleton. Anyone?
Thanks!
The problem is that you are creating only one Repository and one DbContext per lifetime scope, but a new IUnitOfWork every time.
So when you call GetFolders, you create a new IUnitOfWork and dispose it, which disposes the DbContext (in IUnitOfWork.Dispose()). When you call GetFolders again, the second IUnitOfWork is created in the same lifetime scope, so it is injected with the already-created repository and the already-created (and now disposed) DbContext; the container doesn't create a new instance because you are still in the same lifetime scope.
So on the second call, your Repository and IUnitOfWork are trying to use the disposed instance of DbContext, hence the error you are seeing.
As a solution, you can simply not dispose the DbContext in IUnitOfWork and dispose it only at the end of your request... or you could even not dispose it at all. This may sound strange, but check this post.
I'm copying the important part, by Diego Vega, in case the link goes dead:
The default behavior of DbContext is that the underlying connection is automatically opened any time is needed and closed when it is no longer needed. E.g. when you execute a query and iterate over query results using “foreach”, the call to IEnumerable.GetEnumerator() will cause the connection to be opened, and when later there are no more results available, “foreach” will take care of calling Dispose on the enumerator, which will close the connection. In a similar way, a call to DbContext.SaveChanges() will open the connection before sending changes to the database and will close it before returning.
Given this default behavior, in many real-world cases it is harmless to leave the context without disposing it and just rely on garbage collection.
That said, there are two main reason our sample code tends to always use “using” or dispose the context in some other way:
The default automatic open/close behavior is relatively easy to override: you can assume control of when the connection is opened and closed by manually opening the connection. Once you start doing this in some part of your code, then forgetting to dispose the context becomes harmful, because you might be leaking open connections.
DbContext implements IDisposable following the recommended pattern, which includes exposing a virtual protected Dispose method that derived types can override if, for example, they need to aggregate other unmanaged resources into the lifetime of the context.
So basically, unless you are managing the connection, or have a specific need to dispose it, it's safe to not do it.
I'd still recommend disposing it, of course, but in case you don't see where it'd be a good time to do it, you may just not do it at all.
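To illustrate the first point in the quote (the scenario where skipping disposal becomes harmful), here is a sketch; MyDbContext is an illustrative name:

```csharp
// EF's default: the connection is opened per operation and closed again,
// so an undisposed context holds no open connection.
var db = new MyDbContext();
var count = db.Folders.Count();   // open -> query -> close, automatically

// But once you open the connection manually, EF leaves it alone:
db.Database.Connection.Open();
// ... several queries reusing the same open connection ...

// From here on, forgetting to dispose the context leaks an open
// database connection until the GC finalizes it.
db.Dispose();
```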
What would be a better approach for an XML-based repository:
1) Save changes to the underlying xml document on each call to the repository...
public class XmlRepository1
{
    private XDocument xDocument;

    public void CrudOp()
    {
        // Perform CRUD operation...
        // Call Save()
        xDocument.Save(path);
    }
}
or
2) Provide the end-user with a SaveChanges() method...
public class XmlRepository2
{
    private XDocument xDocument;

    public void CrudOp()
    {
        // Perform CRUD operation...
        // DON'T call save
    }

    // Provide a SaveChanges() method to the end-user...
    public void SaveChanges()
    {
        xDocument.Save(path);
    }
}
My inclination leans towards option 1, because providing a SaveChanges() method doesn't really seem like a repository's responsibility. However, I'm second-guessing this decision for a couple of reasons:
a) In a multi-threaded environment, option 2 gives the end user an easy way to roll back changes should a call to the repository fail, leaving objects in a partially mutated state.
b) Option 2 provides a "batch-like" paradigm, which I can see being more flexible for a variety of reasons.
Consider adding some sort of transaction support (close to your second approach):
public class XmlRepository2
{
    private XDocument xDocument;

    public void CrudOp()
    {
        // DON'T call save
    }

    public void MakeTransactedChanges(Action<XmlRepository2> makeChanges)
    {
        try
        {
            makeChanges(this);
            saveChanges();
        }
        catch (RepositoryException e)
        {
            // Revert changes.
        }
    }

    private void saveChanges()
    {
        xDocument.Save(path);
    }
}
I prefer to have a separate Save method in the repository, so I have a chance to revert my changes if something goes wrong.
I found the article Repositories and the Save Method. I hope it helps.