MVC ASP.NET is using a lot of memory - c#

If I just browse some pages on the app, it sits at around 500MB. Many of these pages access the database, but at this point I only have roughly a couple of rows in each of 10 tables, mostly storing strings and some small icons of less than 50KB.
The real problem occurs when I download a file. The file is roughly 140MB and is stored as a varbinary(MAX) in the database. The memory usage suddenly rises to 1.3GB for a split second and then falls back to 1GB. The code for that action is here:
public ActionResult DownloadIpa(int buildId)
{
    var build = _unitOfWork.Repository<Build>().GetById(buildId);
    var buildFiles = _unitOfWork.Repository<BuildFiles>().GetById(buildId);
    if (buildFiles == null)
    {
        throw new HttpException(404, "Item not found");
    }

    var app = _unitOfWork.Repository<App>().GetById(build.AppId);
    var fileName = app.Name + ".ipa";

    app.Downloads++;
    _unitOfWork.Repository<App>().Update(app);
    _unitOfWork.Save();

    return DownloadFile(buildFiles.Ipa, fileName);
}
private ActionResult DownloadFile(byte[] file, string fileName, string type = "application/octet-stream")
{
    if (file == null)
    {
        throw new HttpException(500, "Empty file");
    }
    if (fileName.Equals(""))
    {
        throw new HttpException(500, "No name");
    }
    return File(file, type, fileName);
}
On my local computer, if I don't do anything, the memory usage stays at 1GB. If I then go back and navigate to some other pages, it falls back down to 500MB.
On the deployment server, it stays at 1.6GB after the first download no matter what I do. I can force the memory usage to increase by continually downloading files until it reaches 3GB, where it drops back down to 1.6GB.
In every controller, I have overridden the Dispose() method like so:
protected override void Dispose(bool disposing)
{
    _unitOfWork.Dispose();
    base.Dispose(disposing);
}
This refers to:
public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

public void Dispose(bool disposing)
{
    if (!_disposed)
    {
        if (disposing)
        {
            _context.Dispose();
        }
    }
    _disposed = true;
}
So my unit of work should be disposed every time the controller is disposed. I am using Unity and I register the unit of work with a HierarchicalLifetimeManager.
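The registration is roughly along these lines (the interface and class names here are approximations, not the exact ones from my project):

// Assumed registration: one context/unit of work per child container (i.e. per controller resolution).
container.RegisterType<IDbContext, ApplicationDbContext>(new HierarchicalLifetimeManager());
container.RegisterType<IUnitOfWork, UnitOfWork>(new HierarchicalLifetimeManager());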
Here are a few screenshots from the Profiler:
I believe this could be the problem, or I may be going down the wrong track. Why would Find() use 300MB?
EDIT:
Repository:
public class Repository<TEntity> : IRepository<TEntity> where TEntity : class
{
    internal IDbContext Context;
    internal IDbSet<TEntity> DbSet;

    public Repository(IDbContext context)
    {
        Context = context;
        DbSet = Context.Set<TEntity>();
    }

    public virtual IEnumerable<TEntity> GetAll()
    {
        return DbSet.ToList();
    }

    public virtual TEntity GetById(object id)
    {
        return DbSet.Find(id);
    }

    public TEntity GetSingle(Expression<Func<TEntity, bool>> predicate)
    {
        return DbSet.Where(predicate).SingleOrDefault();
    }

    public virtual RepositoryQuery<TEntity> Query()
    {
        return new RepositoryQuery<TEntity>(this);
    }

    internal IEnumerable<TEntity> Get(
        Expression<Func<TEntity, bool>> filter = null,
        Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> orderBy = null,
        List<Expression<Func<TEntity, object>>> includeProperties = null)
    {
        IQueryable<TEntity> query = DbSet;

        if (includeProperties != null)
        {
            // Include returns a new query, so the result must be reassigned or it is silently discarded.
            includeProperties.ForEach(i => query = query.Include(i));
        }
        if (filter != null)
        {
            query = query.Where(filter);
        }
        if (orderBy != null)
        {
            query = orderBy(query);
        }
        return query.ToList();
    }

    public virtual void Insert(TEntity entity)
    {
        DbSet.Add(entity);
    }

    public virtual void Update(TEntity entity)
    {
        DbSet.Attach(entity);
        Context.Entry(entity).State = EntityState.Modified;
    }

    public virtual void Delete(object id)
    {
        var entity = DbSet.Find(id);
        Delete(entity);
    }

    public virtual void Delete(TEntity entity)
    {
        if (Context.Entry(entity).State == EntityState.Detached)
        {
            DbSet.Attach(entity);
        }
        DbSet.Remove(entity);
    }
}
EDIT 2:
I ran dotMemory for a variety of scenarios and this is what I got.
The red circles indicate that sometimes there are multiple rises and drops on a single page visit. The blue circle indicates the download of a 40MB file. The green circle indicates the download of a 140MB file. Furthermore, much of the time the memory usage keeps increasing for a few more seconds even after the page has finished loading.

Because the file is large, it is allocated on the Large Object Heap, which is collected with a gen 2 collection (which you can see in your profile: the purple blocks are the large object heap, and you can see it being collected after 10 seconds).
On your production server, you most likely have much more memory than on your local machine. Because there is less memory pressure, collections occur less frequently, which explains why the usage adds up to a higher number: several files sit on the LOH before it gets collected.
I wouldn't be surprised at all if, across the different buffers in MVC and EF, some data also gets copied around in unsafe blocks, which would explain the unmanaged memory growth (the thin spike for EF, the wide plateau for MVC).
Finally, a 500MB baseline is not completely surprising for a large project (madness! but true!).
So a quite probable answer to your question of why it uses so much memory is "because it can", or in other words, because there is no memory pressure to perform a gen 2 collection, and the downloaded files sit unused in your large object heap until a collection evicts them, since memory is abundant on your production server.
This is probably not even a real problem: if there were more memory pressure, there would be more collection, and you'd see lower memory usage.
As for what to do about it, I'm afraid you're out of luck with Entity Framework. As far as I know, it has no streaming API. Web API does allow streaming the response, by the way, but that won't help you much if you still have the whole large object sitting in memory anyway (though it might help some with the unmanaged memory in the parts of MVC I haven't explored).
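For what it's worth, a streamed Web API response looks roughly like the sketch below. The caveat above still applies: it only pays off if the data source itself is read in chunks (for example with SqlDataReader, as another answer suggests) rather than materialised as one byte[] first. The controller name and the IpaStreamer helper are illustrations, not anything from the question.

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class IpaStreamController : ApiController
{
    [HttpGet]
    public HttpResponseMessage Download(int buildId)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            // PushStreamContent writes directly to the response stream as data becomes
            // available, so the full file never has to be buffered by Web API.
            Content = new PushStreamContent((outputStream, httpContent, transportContext) =>
            {
                using (outputStream)
                {
                    // Hypothetical helper that reads the blob in chunks and copies it out.
                    IpaStreamer.CopyTo(buildId, outputStream);
                }
            })
        };

        response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
        {
            FileName = "app.ipa"
        };
        return response;
    }
}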

Add a GC.Collect() to the Dispose method for testing purposes. If the leak stays, it is a real leak. If it vanishes, it was just delayed GC.
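Applied to the controller Dispose override from the question, the diagnostic might look like this (for testing only; don't ship forced collections):

protected override void Dispose(bool disposing)
{
    _unitOfWork.Dispose();
    base.Dispose(disposing);

    // Diagnostic only: force a full blocking collection (twice, with finalizers run in
    // between) so a delayed GC can be ruled out. Remove this once the test is done.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
}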
You did that and said:
@usr Memory usage now hardly reaches 600MB. So really just delayed?
Clearly, there is no memory leak if GC.Collect removes the memory you were worried about. If you want to be really sure, run your test 10 times. Memory usage should be stable.
Processing such big files as a single chunk can lead to multiplied memory usage as the file travels through the different components and frameworks. It can be a good idea to switch to a streaming approach.

Apparently, that consists of System.Web and all its children taking up around 200MB. This is quoted as the absolute minimum for your application pool.
Our web application, using EF 6 with a model of 220+ entities on .NET 4.0, starts up at around 480MB idle. We perform some AutoMapper operations at startup. Memory consumption peaks and then returns to around 500MB in daily use. We've just accepted this as the norm.
Now, for your file download spikes. The issue under Web Forms when using an ashx handler or the like was explored in this question: ASP.net memory usage during download
I don't know how that relates to the FileActionResult in MVC, but you can see that the buffer size needed to be controlled manually to minimise the memory spike. Try to apply the principles behind the answer from that question by doing something like:
Response.BufferOutput = false;
var stream = new MemoryStream(file);
stream.Position = 0;
return new FileStreamResult(stream, type); // Or just pass the "file" parameter as a stream
After applying this change, what does the memory behaviour look like?
See 'Debugging memory problems (MSDN)' for more details.

You may need to read the data in chunks and write to the output stream.
Take a look at SqlDataReader.GetBytes
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldatareader.getbytes(v=vs.110).aspx
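A minimal sketch of that approach, bypassing EF for the blob column and copying it to an output stream in small chunks. The connection-string name, table and column names are assumptions based on the question:

using System;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.IO;

public static class IpaStreamer
{
    public static void CopyTo(int buildId, Stream output)
    {
        var connectionString = ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Ipa FROM BuildFiles WHERE BuildId = @id", connection))
        {
            command.Parameters.AddWithValue("@id", buildId);
            connection.Open();

            // SequentialAccess streams the column instead of buffering the whole row in memory.
            using (var reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                if (!reader.Read())
                {
                    throw new InvalidOperationException("Build file not found");
                }

                var buffer = new byte[81920];   // stays below the 85K large-object threshold
                long offset = 0;
                long read;
                while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, (int)read);
                    offset += read;
                }
            }
        }
    }
}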

This could be one of a few things:
As your file is rather large and is stored in your database, and you are retrieving it via Entity Framework, you are caching this data in a few places. Each EF request caches the data until your context is disposed. When you return the file from the action, the data is then loaded again and streamed to the client. All of this happens in ASP.NET, as explained already.
A solution to this issue is not to stream large files directly from the database with EF and ASP.NET. A better solution is to use a background process to cache large files locally to the website and then have the client download them via a direct URL. This lets IIS manage the streaming, saves your website a request and saves a lot of memory.
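A hedged sketch of that idea, reworking the action from the question so the blob is written to disk once (ideally by a background job) and the response is then streamed from the file path rather than from a byte[]; the cache folder and file naming are assumptions:

public ActionResult DownloadIpa(int buildId)
{
    var cacheDir = Server.MapPath("~/App_Data/ipa-cache");
    var cachePath = Path.Combine(cacheDir, buildId + ".ipa");

    if (!System.IO.File.Exists(cachePath))
    {
        // Ideally done ahead of time by a background process; shown inline for brevity.
        var buildFiles = _unitOfWork.Repository<BuildFiles>().GetById(buildId);
        if (buildFiles == null)
        {
            throw new HttpException(404, "Item not found");
        }
        Directory.CreateDirectory(cacheDir);
        System.IO.File.WriteAllBytes(cachePath, buildFiles.Ipa);
    }

    // FilePathResult streams from disk, so the large byte[] is not held for the
    // whole lifetime of the response.
    return File(cachePath, "application/octet-stream", buildId + ".ipa");
}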
OR (less likely)
Seeing that you are using Visual Studio 2013, this sounds awfully like a Page Inspector issue.
What happens is that when you run your website with IIS Express from Visual Studio, Page Inspector caches all of the response data - including that of your file - causing a lot of memory to be used. Try adding:
<appSettings>
  <add key="PageInspector:ServerCodeMappingSupport" value="Disabled" />
</appSettings>
to your web.config to disable Page Inspector to see if that helps.
TL;DR
Cache the large file locally and let the client download the file directly. Let IIS handle the hard work for you.

I suggest trying the Ionic.Zip library. I use it on one of our sites where there is a requirement to download multiple files as one unit.
I recently tested it with a group of files where one of the files is as large as 600MB:
Total size of zipped/compressed folder: 260MB
Total Size of unzipped folder: 630MB
Memory usage spiked from 350MB to 650MB during download
Total time: 1m 10s to download, no VPN

Related

Out of memory exception after a certain execution time but C# application private bytes aren't increasing

We have an application which writes records to a SQL Server CE (version 3.5) / MySQL (5.7.11 community) database table [depending on the configuration] through NHibernate (version 3.1.0.4000).
The method which performs the save to the database table has the following structure, so everything should be disposed correctly:
using (ISession session = SessionHelper.GetSession())
using (ITransaction txn = session.BeginTransaction())
{
    session.Save(entity);
    txn.Commit();
}
After about a week of heavy work (where several hundred thousand records have been written), the application stops working and throws an out-of-memory error.
Then:
With SQL Server CE, the database gets corrupted and needs to be manually repaired
With MySQL, the mysqld daemon is terminated and needs to be restarted
We've been monitoring the application's memory usage through ANTS Memory Profiler (with the SQL CE configuration) but, to our surprise, the application's "private bytes" don't seem to increase at all - this is reported both by ANTS and by the Resource Manager.
Still, when the application is force-closed (after such an error shows up), the "physical memory usage" in Task Manager falls from about 80% right down to 20-30%, and I'm again able to start other processes without getting another out-of-memory exception.
Doing some research, I've found this:
What is private bytes, virtual bytes, working set?
I quote the last part about private bytes:
Private Bytes are a reasonable approximation of the amount of memory your executable is using and can be used to help narrow down a list of potential candidates for a memory leak; if you see the number growing and growing constantly and endlessly, you would want to check that process for a leak. This cannot, however, prove that there is or is not a leak.
Considering the rest of the linked topic, as far as I understand, "private bytes" may or may not include the memory allocated by linked unmanaged DLLs, so:
I configured ANTS to also report information about unmanaged memory (the "Unmanaged memory breakdown by module" section) and noticed that one of the 2 following modules (depending on a specific session factory setting) takes up more and more space (at a rate consistent with the computer running out of memory in about a week):
sqlceqp35
MSVCR120
Given the current results, I'm planning the following tests:
updating the NHibernate version
further analyzing the current NHibernate SessionHelper configuration
creating an empty console application without the WPF user interface (yes, this application uses WPF) and adding more and more code until I'm able to reproduce the issue
Any suggestions?
EDIT 30/06/2016:
Here's the session factory initialization:
SESSION FACTORY (the custom driver is to avoid truncation after 4000 chars):
factory = Fluently.Configure()
    .Database(MsSqlCeConfiguration.Standard.ConnectionString(connString)
        .ShowSql()
        .MaxFetchDepth(3)
        .Driver<MySqlServerCeDriver>()) // FIX truncation 4000 chars
    .Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly()))
    .ProxyFactoryFactory<NHibernate.ByteCode.LinFu.ProxyFactoryFactory>()
    .ExposeConfiguration(c =>
    {
        c.SetProperty("cache.provider_class", "NHibernate.Cache.HashtableCacheProvider");
        c.SetProperty("cache.use_query_cache", "true");
        c.SetProperty("command_timeout", "120");
    })
    .BuildSessionFactory();
public class MySqlServerCeDriver : SqlServerCeDriver
{
    protected override void InitializeParameter(
        IDbDataParameter dbParam,
        string name,
        SqlType sqlType)
    {
        base.InitializeParameter(dbParam, name, sqlType);

        if (sqlType is StringClobSqlType)
        {
            var parameter = (SqlCeParameter)dbParam;
            parameter.SqlDbType = SqlDbType.NText;
        }
    }
}
EDIT 07/07/2016
As requested, here is what GetSession() does:
public static ISession GetSession()
{
    ISession session = factory.OpenSession();
    session.FlushMode = FlushMode.Commit;
    return session;
}

SharpDX memory fragmentation

I am working on a .NET 3.5 application which uses SharpDX to render tiled 2D images.
Textures (Texture2D) are loaded into a cache on-demand, and are created in the managed pool.
Textures are disposed of when no longer required, and I have verified that Dispose() is called correctly. SharpDX object tracking indicates that there are no textures being finalized.
The issue is that large amounts of unmanaged heap memory used by the textures continue to be reserved after disposal. This memory is reused when loading a new texture, so memory is not being leaked.
However, another part of the application also requires significant chunks of memory to process new images. Because these heaps are still present, even though the textures have been disposed, there is not enough contiguous memory to load another image (which can be hundreds of MB).
If I allocate unmanaged memory using AllocHGlobal, the resulting heap memory completely vanishes again after calling FreeHGlobal.
VMMap shows the unmanaged heap (red) after heavy usage of the application.
We can see here that unmanaged heap accounts for ~380MB, even though only ~20MB is actually committed at this point.
Long term, the application is being ported to 64-bit. However, this is not trivial due to unmanaged dependencies. Also, not all users are on 64-bit machines.
EDIT: I've put together a demonstration of the issue - create a WinForms application and install SharpDX 2.6.3 via NuGet.
Form1.cs:
using System.Collections.Generic;
using System.Diagnostics;
using System.Windows.Forms;
using SharpDX.Direct3D9;

namespace SharpDXRepro {
    public partial class Form1 : Form {
        private readonly SharpDXRenderer renderer;
        private readonly List<Texture> textures = new List<Texture>();

        public Form1() {
            InitializeComponent();
            renderer = new SharpDXRenderer(this);
            Debugger.Break(); // Check VMMap here
            LoadTextures();
            Debugger.Break(); // Check VMMap here
            DisposeAllTextures();
            Debugger.Break(); // Check VMMap here
            renderer.Dispose();
            Debugger.Break(); // Check VMMap here
        }

        private void LoadTextures() {
            for (int i = 0; i < 1000; i++) {
                textures.Add(renderer.LoadTextureFromFile(@"D:\Image256x256.jpg"));
            }
        }

        private void DisposeAllTextures() {
            foreach (var texture in textures.ToArray()) {
                texture.Dispose();
                textures.Remove(texture);
            }
        }
    }
}
SharpDXRenderer.cs:
using System;
using System.IO;
using System.Linq;
using System.Windows.Forms;
using SharpDX.Direct3D9;

namespace SharpDXRepro {
    public class SharpDXRenderer : IDisposable {
        private readonly Control parentControl;

        private Direct3D direct3d;
        private Device device;
        private DeviceType deviceType = DeviceType.Hardware;
        private PresentParameters presentParameters;
        private CreateFlags createFlags = CreateFlags.HardwareVertexProcessing | CreateFlags.Multithreaded;

        public SharpDXRenderer(Control parentControl) {
            this.parentControl = parentControl;
            InitialiseDevice();
        }

        public void InitialiseDevice() {
            direct3d = new Direct3D();
            AdapterInformation defaultAdapter = direct3d.Adapters.First();

            presentParameters = new PresentParameters {
                Windowed = true,
                EnableAutoDepthStencil = true,
                AutoDepthStencilFormat = Format.D16,
                SwapEffect = SwapEffect.Discard,
                PresentationInterval = PresentInterval.One,
                BackBufferWidth = parentControl.ClientSize.Width,
                BackBufferHeight = parentControl.ClientSize.Height,
                BackBufferCount = 1,
                BackBufferFormat = defaultAdapter.CurrentDisplayMode.Format,
            };

            device = new Device(direct3d, direct3d.Adapters[0].Adapter, deviceType,
                parentControl.Handle, createFlags, presentParameters);
        }

        public Texture LoadTextureFromFile(string filename) {
            using (var stream = new FileStream(filename, FileMode.Open, FileAccess.Read)) {
                return Texture.FromStream(device, stream, 0, 0, 1, Usage.None, Format.Unknown, Pool.Managed, Filter.Point, Filter.None, 0);
            }
        }

        public void Dispose() {
            if (device != null) {
                device.Dispose();
                device = null;
            }
            if (direct3d != null) {
                direct3d.Dispose();
                direct3d = null;
            }
        }
    }
}
My question therefore is - (how) can I reclaim the memory consumed by these unmanaged heaps after the textures have been disposed?
It seems the memory that's causing the issues is allocated by the nVidia driver. As far as I can tell, all the deallocation methods are properly called, so this might be a bug in the drivers. Looking around the internet shows some issues that seem related to this, though nothing serious enough to be worth referencing. I can't test this on an ATI card (I haven't seen one in like ten years :D).
So it seems like your options are:
Make sure your textures are big enough to never be allocated on the "shared" heaps. This makes the memory leak proceed much more slowly - although it's still unreleased memory, it won't cause memory fragmentation anywhere near as serious as you're experiencing. You're talking about drawing tiles - this has historically been done with tilesets, which give you much better handling (though they also have drawbacks). In my tests, simply avoiding tiny textures all but eliminated the problem - it's hard to tell if it's just hidden or completely gone (both are quite possible).
Handle your processing in a separate process. Your main application would launch the other process whenever needed, and the memory will be properly reclaimed when the helper process exits. Of course, this only makes sense if you're writing some kind of processing application - if you're making something that actually displays the textures, this isn't going to help (or at least it's going to be really tricky to set up).
Do not dispose of the textures. The managed texture pool handles paging the textures to and from the device for you, and it even allows you to use priorities etc., as well as flush the whole on-device (managed) memory (see the sketch after this list). This means the textures will remain in your process memory, but it seems you'll still get better memory usage than with your current approach :)
Possibly, the problems might be related to, e.g., DirectX 9 contexts only. You might want to test with one of the newer interfaces, like DX10 or DXGI. This doesn't necessarily limit you to DX10+ GPUs - but you will lose support for Windows XP (which isn't supported anymore anyway).
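A sketch of the third option, under the assumption that SharpDX exposes the D3D9 managed-pool calls (SetPriority via a Priority property on Resource, and IDirect3DDevice9::EvictManagedResources as Device.EvictManagedResources()) - verify those member names against your SharpDX version before relying on this:

// Hint which textures can be paged out first, then flush the on-device copies of
// managed-pool resources. The system-memory copies stay alive, so the textures can
// still be drawn later without reloading the files.
public void ReleaseDeviceCopies(Device device, IEnumerable<Texture> coldTextures)
{
    foreach (var texture in coldTextures)
    {
        texture.Priority = 0; // lowest priority: evicted first
    }

    device.EvictManagedResources();
}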

High memory usage by ASP.NET applications

We have an issue with some of our ASP.NET applications. Some of our apps claim a large amount of memory from the start as their working set.
Our 2 web-farm servers (4GB of RAM each) run multiple applications. We have a stable environment with about 1.2GB of memory free.
Then we add an MVC5 + Web API v2 + Entity Framework app that instantly claims 1+GB as working-set memory, while only actually using about 300MB. This causes other applications to complain that there is not enough memory left.
We already tried setting the limit for virtual memory and the limit for private memory, to no avail. If we set this to about 500MB, the app still uses more or less the same amount of memory (way over 500) and does not seem to respect the limits put in place.
For reference, I tested this with an empty MVC5 project (VS2013 template), and this already claims 300MB of memory while only using about 10MB.
Setting the app as a 32-bit app seems to have some impact in reducing the size of the working set.
Is there any way to reduce the size of the working set, or to enforce a hard limit on the size of it?
Edit:
In the case of the huge memory use for the project using Web API v2 and Entity Framework, my API controllers look like this:
namespace Foo.Api
{
    public class BarController : ApiController
    {
        private FooContext db = new FooContext();

        public IQueryable<Bar> GetBar(string bla)
        {
            return db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year);
        }
    }
}
as they do in most tutorials I could find (including the ones from Microsoft). Wrapping the context in a using block here does not work because of LINQ's deferred execution. It could work if I added a ToList() (not tested) everywhere, but does that have any other impact?
edit2:
It works if I do
namespace Foo.Api
{
    public class BarController : ApiController
    {
        public List<Bar> GetBar(string bla)
        {
            using (FooContext db = new FooContext())
            {
                return db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year).ToList();
            }
        }
    }
}
Does the ToList() have any implications for the performance of the API? (I know I cannot keep composing the query cheaply as with an IQueryable.)
Edit3:
I notice that it's the private working set of the app that is quite high. Is there a way to limit this (without causing constant recycles)?
Edit4:
As far as I know, I have a Dispose on each and every ApiController. My front end is just some simple MVC controllers, but for the large part .cshtml and JavaScript (Angular) files.
We have another app, just regular MVC with two models and some simple views (no database or other external resources that could be leaked), and this also consumes up to 400-500MB of memory. If I profile it, I can't see anything that indicates memory leaks; I do see that only 10 or 20MB is actually used, and the rest is unmanaged memory that is unassigned (but part of the private working set, so claimed by this app and unusable by any other).
I had a similar problem with some of my applications. I was able to solve it by properly closing the disposable database resources by wrapping them in using blocks.
For Entity Framework, that means ensuring you always close your context after each request. Connections should be disposed of between requests.
using (var db = new MyEFContext())
{
    // Build the query here.
    var query = from u in db.User
                where u.UserId == 1234
                select u.Name;

    // Execute the query.
    return query.ToList();

    // The closing brace will dispose the context properly.
}
You may need to wrap the context in a service that request-caches it, keeping it alive throughout the request and disposing of it when the request completes (a sketch of this follows).
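A hedged sketch of the request-caching idea using HttpContext.Items (the FooContext name comes from the question; the helper class itself is an assumption):

using System.Web;

public static class RequestScopedContext
{
    private const string Key = "__FooContext";

    // Lazily creates one FooContext per request and stores it in HttpContext.Items.
    public static FooContext Current
    {
        get
        {
            var items = HttpContext.Current.Items;
            var context = items[Key] as FooContext;
            if (context == null)
            {
                context = new FooContext();
                items[Key] = context;
            }
            return context;
        }
    }

    // Call this from Application_EndRequest in Global.asax so the context is
    // disposed when the request completes.
    public static void DisposeCurrent()
    {
        var context = HttpContext.Current.Items[Key] as FooContext;
        if (context != null)
        {
            context.Dispose();
            HttpContext.Current.Items.Remove(Key);
        }
    }
}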
Or, if using the pattern of having a single context for the entire controller like in the MSDN examples, make sure you override the Dispose(bool) method, like the example here.
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        db.Dispose();
    }
    base.Dispose(disposing);
}
So your controller (from above) should look like this:
namespace Foo.Api
{
    public class BarController : ApiController
    {
        private FooContext db = new FooContext();

        public IQueryable<Bar> GetBar(string bla)
        {
            return db.Bar.Where(f => f.Category.Equals(bla)).OrderBy(f => f.Year);
        }

        // WebApi 2 will call this automatically after each
        // request. You need this to ensure your context is disposed
        // and the memory it is using is freed when your app does garbage
        // collection.
        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                db.Dispose();
            }
            base.Dispose(disposing);
        }
    }
}
The behavior I saw was that the application would consume a lot of memory, but it could garbage collect enough memory to keep it from ever getting an OutOfMemoryException. This made it difficult to find the problem, but disposing the database resources solved it. One of the applications used to hover at around 600 MB of RAM usage, and now it hovers around 75 MB.
But this advice doesn't just apply to database connections. Any class that implements IDisposable should be suspect if you are running into memory leaks. And since you mentioned you are using Entity Framework, it is the most likely suspect.
Removing all Telerik Kendo MVC references (DLLs and such) fixed our problems. If we run the application without them, all our memory problems are gone and we see normal memory use.
Basically: it was an external library causing the high memory use.

Loading many large photos into a Panel efficiently

How do I load many large photos from a directory and its sub-directories in such a way as to prevent an OutOfMemoryException?
I have been using:
foreach (string file in files)
{
    PictureBox pic = new PictureBox() { Image = Image.FromFile(file) };
    this.Controls.Add(pic);
}
which has worked until now. The photos that I need to work with now are anywhere between 15 and 40MB each, and there could be hundreds of them.
You're attacking the garbage collector with this approach. Loading 15-40MB objects in a loop will always invite an OutOfMemoryException, because the objects go straight onto the large object heap - all objects > 85K do. Large objects become Gen 2 objects immediately, and the memory is not automatically compacted as of .NET 4.5.1 (you have to request it) and will not be compacted at all in earlier versions.
Therefore, even if you get away with loading the objects initially and the app keeps running, there is every chance that these objects, even when completely dereferenced, will hang around and fragment the large object heap. Once fragmentation occurs - say the user closes the control to do something else for a minute or two and then opens it again - it is much more likely that the new objects will not be able to slot into the LOH, because the memory must be contiguous when an allocation occurs. The GC runs collections on Gen 2 and the LOH much less often for performance reasons: memcpy is used by the GC in the background, and this is expensive on larger blocks of memory.
Also, the memory consumed will not be released if you have all of these images referenced from a control that is in use as well - imagine tabs. The whole idea of doing this is misconceived. Use thumbnails, or load full-scale images only as the user needs them, and be careful with the memory consumed; a sketch of the thumbnail approach is below.
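As a rough illustration of the thumbnail route (the sizes and SizeMode choice are arbitrary), the full-size image is disposed as soon as a small copy has been made:

foreach (string file in files)
{
    Image thumbnail;
    using (var fullSize = Image.FromFile(file))
    {
        // Keep only a small copy in memory; the 15-40MB original is released here.
        thumbnail = new Bitmap(fullSize, new Size(120, 90));
    }

    this.Controls.Add(new PictureBox
    {
        Image = thumbnail,
        Width = 120,
        Height = 90,
        SizeMode = PictureBoxSizeMode.Zoom
    });
}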
UPDATE
Rather than telling you what you should and should not do I have decided to try to help you do it :)
I wrote a small program that operates on a directory containing 440 jpeg files with a total size of 335 megabytes. When I first ran your code I got the OutOfMemoryException and the form remained unresponsive.
Step 1
The first thing to note is that if you are compiling as x86 or AnyCPU, you need to change this to x64. Right-click the project, go to the Build tab and set the target platform to x64.
This is because the amount of memory that can be addressed on a 32-bit x86 platform is limited. All .NET processes run within a virtual address space, and the CLR heap size will be whatever the process is allowed by the OS - it is not really within the control of the developer. However, it will allocate as much memory as is available - I am running on 64-bit Windows 8.1, so changing the target platform gives me an almost unlimited amount of memory space to use, right up to the limit of the physical memory your process will be allowed.
After doing this, running your code did not cause an OutOfMemoryException.
Step 2
I changed the target framework from the default 4.5 to 4.5.1 in VS 2013. I did this so I could use GCSettings.LargeObjectHeapCompactionMode, as it is only available in 4.5.1. I noticed that closing the form took an age because the GC was doing a crazy amount of work releasing memory. Basically, I would set this at the end of the LoadAllPics code, as it allows the large object heap to be compacted on the next blocking garbage collection instead of staying fragmented. I believe this will be essential for your app, so if possible try to use this version of the framework. You should test it on earlier versions too, to see the difference when interacting with your app.
Step 3
As the app was still unresponsive, I made the code run asynchronously.
Step 4
As the code now runs on a separate thread from the UI thread, it caused a cross-thread GUI exception when accessing the form, so I had to use Invoke, which posts a message back to the UI thread from the worker thread. This is because UI controls can only be accessed from the UI thread.
Code
private async void button1_Click(object sender, EventArgs e)
{
    await LoadAllPics();
}

private async Task LoadAllPics()
{
    IEnumerable<string> files = Directory.EnumerateFiles(@"C:\Dropbox\Photos", "*.JPG", SearchOption.AllDirectories);

    await Task.Run(() =>
    {
        foreach (string file in files)
        {
            Invoke((MethodInvoker)(() =>
            {
                PictureBox pic = new PictureBox() { Image = Image.FromFile(file) };
                this.Controls.Add(pic);
            }));
        }
    });

    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
}
You can try resizing the image when you put it on the UI.
foreach (string file in files)
{
    PictureBox pic = new PictureBox() { Image = Image.FromFile(file).resizeImage(new Size(50, 50)) };
    this.Controls.Add(pic);
}

public static Image resizeImage(this Image imgToResize, Size size)
{
    return (Image)(new Bitmap(imgToResize, size));
}

Correct way to use databases in Windows 8 and Windows Phone 8

I am currently working on a Windows 8 app which needs to store some tables. Currently, I am using XML files with the XDocument classes for this purpose. It uses save and load methods built on GetFileAsync and CreateFileAsync etc. Moreover, these save and load methods are called by different events. However, whenever there are repeated calls, an exception is thrown telling me that file access is denied. (Expected behavior - more details here!) While there are dirty ways to avoid this (like using locks and such), I am not very happy with the results. I'd rather prefer databases. Moreover, I am planning to write another app for Windows Phone 8 (and possibly a web version) which will make use of the data.
They have been repeatedly saying that Windows 8 is cloud-based. Now the question: what is the correct way to store my data? XML seems right, but it has the problems I mentioned above. What would be an ideal cloud-based solution involving Windows 8, Windows Phone 8 and possibly Azure? All I want is to store tables and make them accessible.
Sorry if the question seems unclear. I will provide information if required.
If you want to use Azure, the easiest way to proceed is Windows Azure Mobile Services. It allows you to set up your database and web services using a web interface in a few minutes.
It's quite cool: it allows you to add custom JavaScript to your web API logic, and it generates JSON web APIs. There are client libraries for Windows 8, Windows Phone and iOS. You could easily roll your own for any HTTP-enabled front end.
However, be aware that taking the cloud route means that your app won't work offline (unless you code a cache system, that is - and a cache will require a local DB).
About the local DB
You really have two possibilities:
1) A real DB in your app, like SQLite. It's available as a NuGet package, but right now ARM support isn't available out of the box, nor guaranteed by the team. If you don't need ARM, go try it :)
2) Plain old file storage, like you did before. I personally do that myself quite often. You will, however, get issues when accessing it from different threads (Access Denied errors).
When you store things in a local file, don't forget to lock the critical sections (i.e. when you read or write to the file) to prevent the access-denied exceptions. To be safe, encapsulate your read/write logic in a service class with a single instance within your app (use the singleton pattern, for instance, or anything equivalent).
Now, the lock itself. I imagine you are using async/await - I like this sweet thing too. But classic C# locks (using the lock keyword, for instance) don't work with async/await. (And even if they worked, blocking wouldn't be cool.)
That's why the marvellous AsyncLock comes into play. It's a lock, but one which (approximately) doesn't block - you await it.
public class AsyncLock
{
    private readonly AsyncSemaphore m_semaphore;
    private readonly Task<Releaser> m_releaser;

    public AsyncLock()
    {
        m_semaphore = new AsyncSemaphore(1);
        m_releaser = Task.FromResult(new Releaser(this));
    }

    public Task<Releaser> LockAsync()
    {
        var wait = m_semaphore.WaitAsync();
        return wait.IsCompleted ?
            m_releaser :
            wait.ContinueWith((_, state) => new Releaser((AsyncLock)state),
                this, CancellationToken.None,
                TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Default);
    }

    public struct Releaser : IDisposable
    {
        private readonly AsyncLock m_toRelease;

        internal Releaser(AsyncLock toRelease) { m_toRelease = toRelease; }

        public void Dispose()
        {
            if (m_toRelease != null)
                m_toRelease.m_semaphore.Release();
        }
    }
}

public class AsyncSemaphore
{
    private readonly static Task s_completed = Task.FromResult(true);
    private readonly Queue<TaskCompletionSource<bool>> m_waiters = new Queue<TaskCompletionSource<bool>>();
    private int m_currentCount;

    public AsyncSemaphore(int initialCount)
    {
        if (initialCount < 0) throw new ArgumentOutOfRangeException("initialCount");
        m_currentCount = initialCount;
    }

    public Task WaitAsync()
    {
        lock (m_waiters)
        {
            if (m_currentCount > 0)
            {
                --m_currentCount;
                return s_completed;
            }
            else
            {
                var waiter = new TaskCompletionSource<bool>();
                m_waiters.Enqueue(waiter);
                return waiter.Task;
            }
        }
    }

    public void Release()
    {
        TaskCompletionSource<bool> toRelease = null;
        lock (m_waiters)
        {
            if (m_waiters.Count > 0)
                toRelease = m_waiters.Dequeue();
            else
                ++m_currentCount;
        }
        if (toRelease != null)
            toRelease.SetResult(true);
    }
}
You can use it this way (I assume you have an AsyncLock field named blogLock; this is taken from one of my own projects):
using (await blogLock.LockAsync())
{
    using (var stream = await folder.OpenStreamForReadAsync(_blogFileName))
    {
        using (var reader = new StreamReader(stream))
        {
            var json = await reader.ReadToEndAsync();
            var blog = await JsonConvert.DeserializeObjectAsync<Blog>(json);
            return blog;
        }
    }
}
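For completeness, the write side can hold the same lock so readers and writers can't collide. This sketch assumes the same blogLock and _blogFileName fields and uses the WinRT stream extensions and JSON.NET calls already shown above:

using (await blogLock.LockAsync())
{
    using (var stream = await folder.OpenStreamForWriteAsync(_blogFileName, CreationCollisionOption.ReplaceExisting))
    using (var writer = new StreamWriter(stream))
    {
        // Serialize the object graph and overwrite the file atomically with respect to
        // other lock holders in this process.
        var json = JsonConvert.SerializeObject(blog);
        await writer.WriteAsync(json);
        await writer.FlushAsync();
    }
}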
I've stumbled across this thread because I have basically the exact same problem. What seems staggering to me is that Microsoft makes its own enterprise-class database product (SQL Server), which already has a couple of lightweight, embeddable versions, and yet these seemingly can't be used with Windows 8/Windows Phone 8 applications to provide a local database. And yet MySQL can!
I've tried a couple of times to dabble in writing Windows Phone 8 apps, using my ASP.NET/VB.NET/SQL experience, but I always get bogged down in trying to learn a different way to perform data operations that I can do in my sleep in a web environment, and I lose interest. Why can't they make it easy to use SQL with W8/WP8 apps?
If the data pertains to the user of the device, look at using SQLite... there is a question on Stack Overflow about SQLite and local WinRT databases here: Local database storage for WinRT/Metro applications
SQL databases
IndexedDB, in the case of Windows 8 and JavaScript development
I know this is an old question that already has an accepted answer, but I'm going to get out my soapbox and answer it anyway because I think that rather than solve the technical problem it is better to use an architecture that doesn't depend on local database facilities.
In my experience very little data requires device local database services.
Most user generated data requiring local storage is non-roaming (ie device specific) user preferences and configuration (eg use removable storage setting). Game results fall into this category. Apps that produce larger quantities of user data are typically implemented on the desktop and almost certainly have a fast reliable connection to the local network, making server-based storage eminently suitable even for "fat" data like Office documents.
Reference data should certainly be server based, but you might choose to cache it. Nokia Maps on Windows Phone 8 is an excellent example of cached server-based data. The cache can even be explicitly pre-loaded in anticipation of off-line use.
The world view I have just expounded has little use for a local SQL Server. If you want a query engine, use LINQ. Express your application settings and user data as an object graph and (de)serialise it as XML. You could even use Linq2Xml directly on the XML if you don't want to maintain ORM classes.
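A small sketch of that approach (the AppSettings type and file name are made up for illustration; XmlSerializer and the WinRT stream extensions are the only APIs assumed):

public class AppSettings
{
    public bool UseRemovableStorage { get; set; }
    public int HighScore { get; set; }
}

public static class SettingsStore
{
    // Serialise the whole object graph to an XML file in the given folder.
    public static async Task SaveAsync(StorageFolder folder, AppSettings settings)
    {
        var serializer = new XmlSerializer(typeof(AppSettings));
        using (var stream = await folder.OpenStreamForWriteAsync("settings.xml", CreationCollisionOption.ReplaceExisting))
        {
            serializer.Serialize(stream, settings);
        }
    }

    // Deserialise it back into the object graph.
    public static async Task<AppSettings> LoadAsync(StorageFolder folder)
    {
        var serializer = new XmlSerializer(typeof(AppSettings));
        using (var stream = await folder.OpenStreamForReadAsync("settings.xml"))
        {
            return (AppSettings)serializer.Deserialize(stream);
        }
    }
}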
Data of any sort that ought to be available across all the user's devices really needs to be cloud stored anyway.
To address some of akshay's comments,
Map data
Geospatial data is typically organised into structures known as quad-trees for a variety of reasons broadly to do with providing a level of detail that varies with zoom. The way these are accessed and manipulated derives considerable advantage from their representation as object graphs, and they are not updated by the users, so while this data certainly could be stored in a relational database and it probably is while it's being compiled, it certainly isn't stored or delivered that way.
LINQ is well adapted to this scenario because it can be applied directly to the quad-tree.
The data certainly is in a file. But I imagine you meant direct file access rather than indirection through another process. Probably the thought in your mind is that it is a good idea to invest significant effort on thoroughly solving the problems of concurrency and query processing once and share the solution between client apps. But this is a very heavyweight solution, and the query processing aspect is already well handled by LINQ (which is why I keep mentioning it).
Your XML problems
Read-only access doesn't need a lock, so avoid the file-system locking problem by caching and using the singleton pattern...
public static class XManager
{
    static Dictionary<string, XDocument> __cache = new Dictionary<string, XDocument>();

    public static XDocument GetXDoc(string filepath)
    {
        if (!__cache.ContainsKey(filepath))
        {
            __cache[filepath] = XDocument.Load(filepath);
        }
        return __cache[filepath];
    }
}
