We have an issue with some of our ASP.NET applications: some of our apps claim a large amount of memory as their working set right from the start.
Our two web-farm servers (4 GB of RAM each) run multiple applications. We have a stable environment with about 1.2 GB of memory free.
Then we add an MVC5 + Web API v2 + Entity Framework app that instantly claims over 1 GB as working-set memory, while only actually using about 300 MB. This causes other applications to complain that there is not enough memory left.
We already tried setting the virtual memory limit and the private memory limit, to no avail. If we set these to about 500 MB, the app still uses more or less the same amount of memory (way over 500 MB) and does not seem to respect the limits put in place.
For reference, I tested this with an empty MVC5 project (VS2013 template), and even that claims 300 MB of memory while only using about 10 MB.
Setting the app to run as a 32-bit app seems to have some impact on reducing the size of the working set.
Is there any way to reduce the size of the working set, or to enforce a hard limit on the size of it?
Edit:
In the case of the huge memory use for the project using Web API v2 and Entity Framework, my API controllers look like this:
namespace Foo.Api
{
    public class BarController : ApiController
    {
        private FooContext db = new FooContext();

        public IQueryable<Bar> GetBar(string bla)
        {
            return db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year);
        }
    }
}
as they look in most tutorials I could find (including the ones from Microsoft). Wrapping the context in a using block does not work here because of LINQ's deferred execution. It could work if I added a ToList() (not tested) everywhere, but does this have any other impact?
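To illustrate the failure mode (a sketch of my understanding, not tested):

// Sketch: the query is only built inside the action, not executed;
// Web API's serializer enumerates it after the action has returned.
public IQueryable<Bar> GetBar(string bla)
{
    using (var db = new FooContext())
    {
        return db.Bar.Where(f => f.Category.Equals(bla)); // not executed yet
    } // db is disposed here, so the later enumeration by the
      // serializer throws an ObjectDisposedException
}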
Edit 2:
It works if I do this instead:
namespace Foo.Api
{
    public class BarController : ApiController
    {
        public List<Bar> GetBar(string bla)
        {
            using (FooContext db = new FooContext())
            {
                return db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year).ToList();
            }
        }
    }
}
Does the ToList() have any implications for the performance of the API? (I know I can no longer compose further queries cheaply, as I could with an IQueryable.)
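For illustration, my understanding of the tradeoff (the paging here is a hypothetical follow-up query, untested):

// With IQueryable, later operators still compose into the SQL that is
// sent to the database, so only the requested rows are fetched:
IQueryable<Bar> query = db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year);
var page = query.Skip(100).Take(10).ToList(); // SQL returns 10 rows

// After ToList(), the same operators run in memory over all rows:
List<Bar> all = db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year).ToList();
var page2 = all.Skip(100).Take(10).ToList(); // everything was fetched first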
Edit 3:
I notice that it's the private working set of the app that is quite high. Is there a way to limit this (without causing constant recycles)?
Edit 4:
As far as I know, I have a Dispose on each and every ApiController. My front end consists of some simple MVC controllers, but for the most part of .cshtml and JavaScript (Angular) files.
We have another app, just regular MVC with two models and some simple views (no database or other external resources that could be leaked), and this also consumes up to 400-500 MB of memory. If I profile it, I can't see anything that indicates memory leaks. I do see that only 10 or 20 MB is actually used; the rest is unmanaged memory that is unassigned (but part of the private working set, so claimed by this app and unusable by any other).
I had a similar problem with some of my applications. I was able to solve it by properly closing the disposable database resources, wrapping them in using blocks.
For Entity Framework, that means ensuring you always close your context after each request; connections should be disposed between requests.
using (var db = new MyEFContext())
{
    // Build the query here.
    var query = from u in db.User
                where u.UserId == 1234
                select u.Name;

    // Execute the query.
    return query.ToList();

    // The closing brace disposes the context properly.
}
You may need to wrap the context into a service that request-caches your context in order to keep it alive throughout the request, and disposes of it when complete.
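For instance, a minimal sketch of such a request-caching service, assuming classic ASP.NET (the RequestContextProvider name is illustrative, not from any particular library):

using System.Web;

public static class RequestContextProvider
{
    private const string Key = "__MyEFContext";

    // One context per HTTP request, created lazily on first use.
    public static MyEFContext Current
    {
        get
        {
            var ctx = (MyEFContext)HttpContext.Current.Items[Key];
            if (ctx == null)
            {
                ctx = new MyEFContext();
                HttpContext.Current.Items[Key] = ctx;
            }
            return ctx;
        }
    }

    // Call this from Application_EndRequest in Global.asax.
    public static void DisposeCurrent()
    {
        var ctx = (MyEFContext)HttpContext.Current.Items[Key];
        if (ctx != null)
        {
            ctx.Dispose();
            HttpContext.Current.Items.Remove(Key);
        }
    }
}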
Or, if using the pattern of having a single context for the entire controller like in the MSDN examples, make sure you override the Dispose(bool) method, like the example here.
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        db.Dispose();
    }
    base.Dispose(disposing);
}
So your controller (from above) should look like this:
namespace Foo.Api
{
    public class BarController : ApiController
    {
        private FooContext db = new FooContext();

        public IQueryable<Bar> GetBar(string bla)
        {
            return db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year);
        }

        // Web API 2 will call this automatically after each request.
        // You need this to ensure your context is disposed and the
        // memory it is using is freed when your app does garbage
        // collection.
        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                db.Dispose();
            }
            base.Dispose(disposing);
        }
    }
}
The behavior I saw was that the application would consume a lot of memory, but it could garbage collect enough memory to keep it from ever getting an OutOfMemoryException. This made it difficult to find the problem, but disposing the database resources solved it. One of the applications used to hover at around 600 MB of RAM usage, and now it hovers around 75 MB.
This advice doesn't just apply to database connections: any class that implements IDisposable should be suspect if you are running into memory leaks. But since you mentioned you are using Entity Framework, it is the most likely suspect.
Removing all Telerik Kendo MVC references (DLLs and such) fixed our problems. If we run the application without them, all our memory problems are gone and we see normal memory use.
Basically: it was an external library causing the high memory use.
I have run into a problem where a long-running singleton, added to HttpApplicationState, which does some data masking (GDPR), stops masking data after running in the background for some time.
It's hard to debug because it only happens in our UAT environment, and it usually happens overnight.
The problem is that the data masking library is third party and still a work in progress (or at the end of that work in progress).
But I'd appreciate it if anyone with better GC knowledge could look at the init code below and confirm this is outside the GC's domain.
Translator.GetInstance() is a lazy loader of the GDPR masking/translation singleton, so it's initialized the first time a user masks/unmasks data.
protected void Application_Start()
{
    // Use the third-party translator when one is available; otherwise
    // fall back to the custom implementation.
    if (Translator.GetInstance() != null)
    {
        Application["MaskDataUtility"] = new MaskDataUtility(Translator.GetInstance());
    }
    else
    {
        Application["MaskDataUtility"] = new MaskDataUtility(new CustomTranslator());
    }
}
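For reference, a hedged sketch of what such a lazily initialized singleton typically looks like (the real third-party implementation may differ):

using System;

public sealed class Translator
{
    // Lazy<T> defers construction until first access and is
    // thread-safe by default.
    private static readonly Lazy<Translator> instance =
        new Lazy<Translator>(() => new Translator());

    private Translator() { }

    public static Translator GetInstance()
    {
        return instance.Value;
    }
}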
We have an application which writes records to a SQL Server CE (version 3.5) or MySQL (5.7.11 Community) database table, depending on the configuration, through NHibernate (version 3.1.0.4000).
The method which performs the save has the following structure, so everything should be disposed correctly:
using (ISession session = SessionHelper.GetSession())
using (ITransaction txn = session.BeginTransaction())
{
    session.Save(entity);
    txn.Commit();
}
After about a week of heavy work (during which several hundred thousand records have been written), the application stops working and throws an out-of-memory error.
Then:
With SQL Server CE the database gets corrupted and needs to be manually repaired
With MySQL the mysqld daemon is terminated and it needs to be restarted
We've been monitoring the application's memory usage through ANTS Memory Profiler (with the SQL CE configuration), but, to our surprise, the application's "private bytes" don't seem to increase at all; this is reported both by ANTS and by the Resource Manager.
Still, when the application is force-closed (after such an error shows up), the "physical memory usage" in Task Manager falls from about 80% right down to 20-30%, and I'm again able to start other processes without getting another out-of-memory exception.
Doing some research, I've found this:
What is private bytes, virtual bytes, working set?
I quote the last part about private bytes:
Private Bytes are a reasonable approximation of the amount of memory your executable is using and can be used to help narrow down a list of potential candidates for a memory leak; if you see the number growing and growing constantly and endlessly, you would want to check that process for a leak. This cannot, however, prove that there is or is not a leak.
Considering the rest of the linked topic, as far as I understand, "private bytes" may or may not include the memory allocated by linked unmanaged DLLs, so:
I configured ANTS to also report information about unmanaged memory (the "Unmanaged memory breakdown by module" section), and I've noticed that one of the two following modules (depending on a specific session-factory setting) takes up more and more space, at a rate consistent with the computer running out of memory in about a week:
sqlceqp35
MSVCR120
Given the current results, I'm planning the following tests:
update the NHibernate version
further analyze the current NHibernate SessionHelper configuration
create an empty console application without a WPF user interface (yes, this application uses WPF) and add more and more code until I'm able to reproduce the issue
Any suggestions?
EDIT 30/06/2016:
Here's the session factory initialization (the custom driver is there to avoid truncation after 4000 chars):
factory = Fluently.Configure()
    .Database(MsSqlCeConfiguration.Standard.ConnectionString(connString)
        .ShowSql()
        .MaxFetchDepth(3)
        .Driver<MySqlServerCeDriver>()) // FIX truncation after 4000 chars
    .Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly()))
    .ProxyFactoryFactory<NHibernate.ByteCode.LinFu.ProxyFactoryFactory>()
    .ExposeConfiguration(c =>
    {
        c.SetProperty("cache.provider_class", "NHibernate.Cache.HashtableCacheProvider");
        c.SetProperty("cache.use_query_cache", "true");
        c.SetProperty("command_timeout", "120");
    })
    .BuildSessionFactory();
public class MySqlServerCeDriver : SqlServerCeDriver
{
    protected override void InitializeParameter(
        IDbDataParameter dbParam,
        string name,
        SqlType sqlType)
    {
        base.InitializeParameter(dbParam, name, sqlType);

        if (sqlType is StringClobSqlType)
        {
            var parameter = (SqlCeParameter)dbParam;
            parameter.SqlDbType = SqlDbType.NText;
        }
    }
}
EDIT 07/07/2016
As requested, GetSession() does the following:
public static ISession GetSession()
{
    ISession session = factory.OpenSession();
    session.FlushMode = FlushMode.Commit;
    return session;
}
If I just browse some pages in the app, it sits at around 500 MB. Many of these pages access the database, but at this point I only have roughly a couple of rows in each of 10 tables, mostly storing strings and some small icons of less than 50 KB.
The real problem occurs when I download a file. The file is roughly 140 MB and is stored as varbinary(MAX) in the database. The memory usage suddenly rises to 1.3 GB for a split second and then falls back to 1 GB. The code for that action is here:
public ActionResult DownloadIpa(int buildId)
{
    var build = _unitOfWork.Repository<Build>().GetById(buildId);
    var buildFiles = _unitOfWork.Repository<BuildFiles>().GetById(buildId);

    if (buildFiles == null)
    {
        throw new HttpException(404, "Item not found");
    }

    var app = _unitOfWork.Repository<App>().GetById(build.AppId);
    var fileName = app.Name + ".ipa";

    app.Downloads++;
    _unitOfWork.Repository<App>().Update(app);
    _unitOfWork.Save();

    return DownloadFile(buildFiles.Ipa, fileName);
}

private ActionResult DownloadFile(byte[] file, string fileName, string type = "application/octet-stream")
{
    if (file == null)
    {
        throw new HttpException(500, "Empty file");
    }

    if (fileName.Equals(""))
    {
        throw new HttpException(500, "No name");
    }

    return File(file, type, fileName);
}
On my local computer, if I don't do anything, the memory usage stays at 1 GB. If I then go back and navigate to some pages, it falls back down to 500 MB.
On the deployment server, it stays at 1.6 GB after the first download, no matter what I do. I can force the memory usage to increase by continually downloading files until it reaches 3 GB, at which point it drops back down to 1.6 GB.
In every controller, I have overridden the Dispose() method like so:
protected override void Dispose(bool disposing)
{
    _unitOfWork.Dispose();
    base.Dispose(disposing);
}
This refers to:
public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

public void Dispose(bool disposing)
{
    if (!_disposed)
    {
        if (disposing)
        {
            _context.Dispose();
        }
    }
    _disposed = true;
}
So my unit of work should be disposed every time the controller is disposed. I am using Unity, and I register the unit of work with a HierarchicalLifetimeManager.
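A hedged sketch of what that registration looks like (the interface and class names here are my assumptions):

// Microsoft.Practices.Unity: HierarchicalLifetimeManager ties an
// instance's lifetime to the child container that resolved it, so MVC
// integrations that create a child container per request get one unit
// of work per request.
container.RegisterType<IDbContext, AppDbContext>(new HierarchicalLifetimeManager());
container.RegisterType<IUnitOfWork, UnitOfWork>(new HierarchicalLifetimeManager());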
Here are a few screenshots from the profiler:
I believe this could be the problem, or I may be going down the wrong track. Why would Find() use 300 MB?
EDIT:
Repository:
public class Repository<TEntity> : IRepository<TEntity> where TEntity : class
{
    internal IDbContext Context;
    internal IDbSet<TEntity> DbSet;

    public Repository(IDbContext context)
    {
        Context = context;
        DbSet = Context.Set<TEntity>();
    }

    public virtual IEnumerable<TEntity> GetAll()
    {
        return DbSet.ToList();
    }

    public virtual TEntity GetById(object id)
    {
        return DbSet.Find(id);
    }

    public TEntity GetSingle(Expression<Func<TEntity, bool>> predicate)
    {
        return DbSet.Where(predicate).SingleOrDefault();
    }

    public virtual RepositoryQuery<TEntity> Query()
    {
        return new RepositoryQuery<TEntity>(this);
    }

    internal IEnumerable<TEntity> Get(
        Expression<Func<TEntity, bool>> filter = null,
        Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> orderBy = null,
        List<Expression<Func<TEntity, object>>> includeProperties = null)
    {
        IQueryable<TEntity> query = DbSet;

        if (includeProperties != null)
        {
            // Include returns a new query; reassign to apply it.
            includeProperties.ForEach(i => query = query.Include(i));
        }

        if (filter != null)
        {
            query = query.Where(filter);
        }

        if (orderBy != null)
        {
            query = orderBy(query);
        }

        return query.ToList();
    }

    public virtual void Insert(TEntity entity)
    {
        DbSet.Add(entity);
    }

    public virtual void Update(TEntity entity)
    {
        DbSet.Attach(entity);
        Context.Entry(entity).State = EntityState.Modified;
    }

    public virtual void Delete(object id)
    {
        var entity = DbSet.Find(id);
        Delete(entity);
    }

    public virtual void Delete(TEntity entity)
    {
        if (Context.Entry(entity).State == EntityState.Detached)
        {
            DbSet.Attach(entity);
        }
        DbSet.Remove(entity);
    }
}
EDIT 2:
I ran dotMemory for a variety of scenarios and this is what I got.
The red circles indicate that sometimes there are multiple rises and drops on a single page visit. The blue circle indicates the download of a 40 MB file, and the green circle the download of a 140 MB file. Furthermore, a lot of the time the memory usage keeps increasing for a few more seconds even after the page has finished loading.
Because the file is large, it is allocated on the Large Object Heap, which is collected with a gen 2 collection (and you can see this in your profile: the purple blocks are the large object heap, and you can see it collected after 10 seconds).
On your production server, you most likely have much more memory than on your local machine. Because there is less memory pressure, collections won't occur as frequently, which explains why the usage adds up to a higher number: there are several files on the LOH before it gets collected.
I wouldn't be surprised at all if, across the different buffers in MVC and EF, some data gets copied around in unsafe blocks too, which would explain the unmanaged memory growth (the thin spike for EF, the wide plateau for MVC).
Finally, a 500 MB baseline for a large project is not completely surprising (madness! but true!).
So a quite probable answer to your question of why it uses so much memory is "because it can": there is no memory pressure to force a gen 2 collection, so the downloaded files sit unused in your large object heap until a collection evicts them, because memory is abundant on your production server.
This is probably not even a real problem: if there were more memory pressure, there would be more collection, and you'd see lower memory usage.
As for what to do about it, I'm afraid you're out of luck with Entity Framework; as far as I know, it has no streaming API. Web API does allow streaming the response, by the way, but that won't help you much if the whole large object is sitting in memory anyway (though it might help some with the unmanaged memory in the parts of MVC I haven't explored).
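For what it's worth, a hedged sketch of Web API response streaming with PushStreamContent (the controller name and file path are illustrative; this avoids buffering the response itself, not bytes EF has already loaded):

using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class FilesController : ApiController
{
    public HttpResponseMessage GetFile()
    {
        var response = new HttpResponseMessage();
        response.Content = new PushStreamContent(async (output, content, context) =>
        {
            // Copy from disk to the response in chunks instead of
            // materializing one big byte[].
            using (var file = File.OpenRead(@"C:\FileCache\build.ipa"))
            {
                await file.CopyToAsync(output);
            }
            output.Close(); // completes the response
        });
        response.Content.Headers.ContentType =
            new MediaTypeHeaderValue("application/octet-stream");
        return response;
    }
}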
Add a GC.Collect() to the Dispose method for testing purposes. If the leak stays, it is a real leak. If it vanishes, it was just delayed GC.
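A minimal sketch of that diagnostic (the #if DEBUG guard is my addition; this is for testing only, never production code):

protected override void Dispose(bool disposing)
{
    _unitOfWork.Dispose();
    base.Dispose(disposing);

#if DEBUG
    // Force a full, blocking collection so delayed GC cannot be
    // mistaken for a leak.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
#endif
}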
You did that and said:
#usr Memory usage now hardly reaches 600MB. So really just delayed?
Clearly, there is no memory leak if GC.Collect() removes the memory that you were worried about. If you want to be really sure, run your test 10 times; memory usage should be stable.
Processing such big files in single chunks can lead to multiplied memory usage as the file travels through the different components and frameworks. It can be a good idea to switch to a streaming approach.
Apparently, that consists of System.Web and all its children taking up around 200 MB. This is quoted as the absolute minimum for your application pool.
Our web application, using EF 6 with a model of 220+ entities on .NET 4.0, starts up at around 480 MB idle. We perform some AutoMapper operations at startup. Memory consumption peaks and then returns to around 500 MB in daily use. We've just accepted this as the norm.
Now, for your file-download spikes: the issue under Web Forms when using an ashx handler or the like was explored in this question: ASP.net memory usage during download
I don't know how that relates to the FileActionResult in MVC, but you can see that the buffer size needed to be controlled manually to minimise the memory spike. Try to apply the principles behind the answer to that question by:
Response.BufferOutput = false;
var stream = new MemoryStream(file);
stream.Position = 0;
return new FileStreamResult(stream, type); // Or just pass the "file" parameter as a stream
After applying this change, what does the memory behaviour look like?
See 'Debugging memory problems (MSDN)' for more details.
You may need to read the data in chunks and write to the output stream.
Take a look at SqlDataReader.GetBytes
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldatareader.getbytes(v=vs.110).aspx
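A hedged sketch of that approach (the table and column names are assumptions based on the question):

using System.Data;
using System.Data.SqlClient;
using System.IO;

public static void StreamIpa(int buildId, Stream output, string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT Ipa FROM BuildFiles WHERE BuildId = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", buildId);
        conn.Open();

        // SequentialAccess streams the varbinary(MAX) column instead of
        // buffering the whole value in memory.
        using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            if (!reader.Read())
                return;

            var buffer = new byte[81920]; // stays below the 85 KB LOH threshold
            long offset = 0;
            long read;
            while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, (int)read);
                offset += read;
            }
        }
    }
}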
This could be one of a few things:
As your file is rather large and is stored in your database, and you are getting it via Entity Framework, you are caching this data in a few places. Each EF request caches the data until your context is disposed. When you return the file from the action, the data is loaded again and then streamed to the client. All of this happens in ASP.NET, as explained already.
A solution to this issue is not to stream large files directly from the database with EF and ASP.NET. A better solution is to use a background process to cache large files locally to the website and then have the client download them with a direct URL. This allows IIS to manage the streaming, saves your website a request, and saves a lot of memory. A rough sketch follows.
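A hedged sketch of that idea in an MVC action (the cache path and the 202 handling are illustrative assumptions):

public ActionResult DownloadIpa(int buildId)
{
    // A background job (not shown) materializes the blob to this path.
    var cachePath = Server.MapPath("~/FileCache/" + buildId + ".ipa");

    if (!System.IO.File.Exists(cachePath))
    {
        return new HttpStatusCodeResult(202, "File is being prepared");
    }

    // FilePathResult lets IIS stream from disk; no byte[] is held in memory.
    return File(cachePath, "application/octet-stream", buildId + ".ipa");
}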
OR (less likely)
Seeing that you are using Visual Studio 2013, this sounds awfully like a Page Inspector issue.
What happens is that when you run your website with IIS Express from Visual Studio, Page Inspector caches all of the response data, including that of your file, causing a lot of memory to be used. Try adding:
<appSettings>
    <add key="PageInspector:ServerCodeMappingSupport" value="Disabled" />
</appSettings>
to your web.config to disable Page Inspector and see if that helps.
TL;DR
Cache the large file locally and let the client download the file directly. Let IIS handle the hard work for you.
I suggest trying the Ionic.Zip library. I use it on one of our sites, where there is a requirement to download multiple files as one unit.
I recently tested it with a group of files, one of which was as large as 600 MB (a basic usage sketch follows the numbers):
Total size of the zipped/compressed folder: 260 MB
Total size of the unzipped folder: 630 MB
Memory usage spiked from 350 MB to 650 MB during the download
Total time: 1m 10s to download, no VPN
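A hedged sketch of the basic usage (paths and names are illustrative):

using System.Web.Mvc;
using Ionic.Zip; // DotNetZip NuGet package

public class DownloadsController : Controller
{
    public ActionResult DownloadAll()
    {
        Response.ContentType = "application/zip";
        Response.AddHeader("Content-Disposition", "attachment; filename=files.zip");

        using (var zip = new ZipFile())
        {
            zip.AddFile(Server.MapPath("~/FileCache/build1.ipa"), "");
            zip.AddFile(Server.MapPath("~/FileCache/build2.ipa"), "");
            zip.Save(Response.OutputStream); // compresses while writing to the response
        }

        return new EmptyResult();
    }
}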
I was following the MVC Music Store tutorial and came to the part where database connections are used (DbConnection here is a subclass of DbContext). I was taught to create methods like this, wrapping the context in a using:
public class StoreManagerController : Controller
{
    //
    // GET: /StoreManager/
    public ActionResult Index()
    {
        using (var db = new DbConnection())
        {
            var albums = db.Albums.Include(a => a.Genre).Include(a => a.Artist);
            return View(albums.ToList());
        }
    }

    ...
}
but Visual Studio generated me a controller which looked like this:
public class StoreManagerController : Controller
{
    private DbConnection db = new DbConnection();

    //
    // GET: /StoreManager/
    public ActionResult Index()
    {
        var albums = db.Albums.Include(a => a.Genre).Include(a => a.Artist);
        return View(albums.ToList());
    }

    ...
}
I assume Visual Studio isn't wrong, but why was I told to wrap each method in a using, to keep connections as short-lived as possible and to give each user a separate connection?
I assume, Visual Studio isn't wrong, but why was I told to wrap each method with using
using (var db = new DbConnection())
{
    var albums = db.Albums.Include(a => a.Genre).Include(a => a.Artist);
    return View(albums.ToList());
}
The scope of db is limited to the curly braces. This is perhaps another purpose the using keyword serves in C#: it defines the scope of a variable, in this case the db object.
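Under the hood, the compiler expands the using statement into roughly this try/finally, which is what guarantees Dispose runs even if an exception is thrown:

// Equivalent expansion of the using block above (per the C# spec):
var db = new DbConnection();
try
{
    var albums = db.Albums.Include(a => a.Genre).Include(a => a.Artist);
    return View(albums.ToList());
}
finally
{
    if (db != null)
    {
        ((IDisposable)db).Dispose();
    }
}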
Now, if you debug the code that Visual Studio generated for you, you will notice that a Dispose method is invoked every time a controller instance finishes serving a request, in other words, after each call to an action method on the corresponding controller.
The DbContext instance is always disposed, for the following reasons:
As you load more objects and their references into memory, the memory consumption of the context may increase rapidly. This may cause performance issues.
If an exception causes the context to be in an unrecoverable state, the whole application may terminate.
The chances of running into concurrency-related issues increase as the gap grows between the time the data is queried and the time it is updated.
For more info: Reference
This might depend on the usability of your app: whether or not you need a persistent connection, and the cost of creating one (and a myriad of other factors).
But for starters, you should always dispose of the connection (as in the first pattern, not the one suggested by Visual Studio), and then move to other patterns based on new requirements or performance-related issues.
The biggest issue I see with the Visual Studio-suggested option is that you have no way of controlling the lifetime of the DbConnection object, and you are leaving it up to the garbage collector to eventually dispose of it. This can leave the connection's resources in use for a rather long, undetermined period of time.
I've got a simple web application using ASP.NET MVC3 and Ninject.Web.MVC (the MVC3 version).
The whole thing is working fine, except when the application ends. Whenever it ends, the kernel is disposed, as seen in Application_End() in NinjectHttpApplication:
Reflector tells me this:
public void Application_End()
{
    lock (this)
    {
        if (kernel != null)
        {
            kernel.Dispose();
            kernel = null;
        }
        this.OnApplicationStopped();
    }
}
What happens is that my web server goes down with a StackOverflowException (I tried both IIS7 and the built-in web server in VS2010). I can only assume this is where it goes wrong, as I haven't written any code of my own that runs when the application ends.
I figured out that the kernel knows how to resolve IKernel (which returns the kernel itself); might this be something that could cause the stack overflow? I could imagine something like this happening:
Kernel.Dispose()
Dispose all instances in the kernel
Hey, look at this: the kernel is also in the kernel. Return to step 1.
In other words, the kernel gets disposed, disposes all references it holds (which include a self-reference), which causes it to dispose itself.
Does this make any sense?
Edit:
It seems the problem is in NinjectHttpApplication. Take a look at this activation code:
public void Application_Start()
{
    lock (this)
    {
        kernel = this.CreateKernel();
        ...
        kernel.Bind<IResolutionRoot>().ToConstant(kernel).InSingletonScope();
        ...
    }
}
It seems OK, but what happens is that whenever an IResolutionRoot is resolved, the kernel is cached within itself. When the kernel is disposed, the cache is emptied, disposing all cached objects, which causes the circular dispose.
A simple solution for NinjectHttpApplication would be to change the binding, from a constant binding to a method binding:
kernel.Bind<IResolutionRoot>().ToConstant(kernel).InSingletonScope();
becomes
kernel.Bind<IResolutionRoot>().ToMethod(x => this.Kernel);
This solves the problem, but I am not sure whether the whole circular dispose caching issue is a bug in Ninject.
I encountered the same issue.
I ended up copying the code for NinjectHttpApplication and removing kernel.Dispose() from the Application_End function:
public void Application_End()
{
    lock (this)
    {
        if (kernel != null)
        {
            //kernel.Dispose();
            kernel = null;
        }
        this.OnApplicationStopped();
    }
}
That should fix the error. I'm not sure if there is a planned fix for it, though.
There was a bug in MVC3. It's fixed in the latest revision and will be part of RC2, coming next week. In the meantime, take the build from the build server: http://teamcity.codebetter.com