I was wondering whether it is wise to cache the Entity Framework's ObjectContext object in the Cache; will this cause problems when multiple users hit the application at the same time?
I've gotten errors like 'connection is currently closed' and wondered whether that was due to multiple users and the cached ObjectContext, or whether it was related to, say, hitting refresh multiple times or stopping the page and quickly doing something else (something we did do to get the error).
I agree with the above. However, I do cache the object context in the HttpContext.Current.Items collection without any issues. Also a good read:
http://dotnetslackers.com/articles/ado_net/managing-entity-framework-objectcontext-lifespan-and-scope-in-n-layered-asp-net-applications.aspx
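For illustration, here's a minimal sketch of that per-request pattern, assuming a generated ObjectContext type called MyEntities (a hypothetical name): the context is created once per request, stored in HttpContext.Current.Items, and disposed at the end of the request.

using System.Web;

// Minimal sketch of per-request ObjectContext scoping via HttpContext.Current.Items.
// "MyEntities" is a hypothetical generated ObjectContext type.
public static class ContextScope
{
    private const string Key = "CurrentObjectContext";

    public static MyEntities Current
    {
        get
        {
            // Items is per-request, so each request gets its own context.
            var ctx = HttpContext.Current.Items[Key] as MyEntities;
            if (ctx == null)
            {
                ctx = new MyEntities();
                HttpContext.Current.Items[Key] = ctx;
            }
            return ctx;
        }
    }

    // Call from Application_EndRequest in Global.asax.
    public static void DisposeCurrent()
    {
        var ctx = HttpContext.Current.Items[Key] as MyEntities;
        if (ctx != null)
        {
            ctx.Dispose();
            HttpContext.Current.Items.Remove(Key);
        }
    }
}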
I wouldn't advise it. The ObjectContext needs to be active to observe changes to any entities you are actively working with, or you'd need to detach any active entities prior to caching the ObjectContext.
If you have no active entities, then there's no real need to cache an ObjectContext. In EF v1 working with disconnected entities was problematic at best, so I'd either not cache at all or wait for Entity Framework v4, which allows for more manageable entities (self-tracking entities, POCO entities, etc.).
Just thought I'd add one last point: multiple threads could be problematic as well. Saving changes will attempt to commit all changes tracked by the ObjectContext, so if multiple users are sharing a single context... well, hopefully you can see the problems.
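To make that concrete, here's a contrived sketch (the context and entity names are made up) of what goes wrong when two users share one cached context:

using System.Linq;

// Contrived example: two logical users sharing one cached ObjectContext.
// "MyEntities" and "Customers" are hypothetical names.
var shared = new MyEntities();

// User A edits an entity but isn't ready to save yet.
var customerA = shared.Customers.First(c => c.Id == 1);
customerA.Name = "Half-finished edit";

// User B makes an unrelated change and saves...
var customerB = shared.Customers.First(c => c.Id == 2);
customerB.Name = "Intentional edit";
shared.SaveChanges(); // ...which also commits User A's half-finished edit.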
I was recently exposed to the Entity Framework 6 caching mechanism.
As we might gather from this article, it does its caching in a first-level manner.
Our system uses EF 6 (code first) along with MemoryCache to improve performance.
The main reason we use MemoryCache is that we need to execute an intensive query on every page request, and we execute this query up to three times per request (in the worst case) because of client callbacks.
I wonder if we still need the MemoryCache mechanism if EF 6 already uses one.
It is worth saying that we don't use any special caching features or cache dependencies, just a simple MemoryCache with timeouts.
The fact that EF caches entities in the context is in no way a replacement for a "real" cache, for several reasons:
You should not reuse an EF context for more than one logical operation, because an EF context represents a unit of work and should be used according to that pattern. Also, even if you do for some reason reuse a context across multiple operations, you absolutely cannot do that in a multi-threaded environment like a web server application.
It does not prevent you from making multiple queries for the same data to your database, for example:
var entity1 = ctx.Entities.Where(c => c.Id == 1).First();
var entity2 = ctx.Entities.Where(c => c.Id == 1).First();
This will still execute two queries against your database, despite the fact that the query is the same and returns the same entity, so nothing is really "cached" in the usual sense here. Note, however, that both queries will return the same entity instance, even if the database row has been changed between the two queries. That is what is meant by EF context "caching": the database query is executed twice, but the second time, while materializing the result, EF notices that an entity with the same key is already attached to the context. It therefore returns this existing ("cached") entity instead and ignores the new values (if any) returned by the second query. That behaviour is an additional reason not to reuse a context across multiple operations (though you should not do so anyway).
So if you want to reduce the load on your database, you have to use second-level caching with whatever suits your needs (from a simple in-memory cache, to a caching EF provider, to a distributed memcached instance).
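As a rough sketch of the simplest option, something like this (the entity and context names are hypothetical; EF 6 and System.Runtime.Caching are assumed) keeps cached entities detached and avoids repeating the query:

using System;
using System.Linq;
using System.Runtime.Caching;

public static class ProductCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static Product GetById(int id)
    {
        string key = "product:" + id;
        var cached = Cache.Get(key) as Product;
        if (cached != null)
            return cached; // served from cache, no database query at all

        using (var ctx = new MyDbContext())
        {
            // AsNoTracking: the cached instance stays detached from any context.
            var product = ctx.Products.AsNoTracking().First(p => p.Id == id);
            Cache.Set(key, product, DateTimeOffset.UtcNow.AddMinutes(5));
            return product;
        }
    }
}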
EF only implements what is called a first-level cache for entities: it stores the entities that have been retrieved during the lifetime of a context, so when you ask for the same entity a second time it returns it from the context. What you need is a second-level cache, but EF doesn't implement this feature. NCache, for example, implements a wonderful caching architecture and ships an out-of-the-box second-level cache provider for EF, though not in its open source version.
I implemented the unit of work pattern as explained in this tutorial:
http://www.codeproject.com/Articles/543810/Dependency-Injection-and-Unit-Of-Work-using-Castle
However, I now encounter a strange problem:
I load an entity from the database within a unit of work (in the transaction)
I update a property of that entity
I do not call the save method on my repository
The transaction is committed
In this scenario, I would expect the updated property not to be persisted to the database, but it is. So an entity loaded in my session is tracked and committed to the database without calling save. What is causing this? And is there a way to tell NHibernate not to update those entities if save is not called?
I realize I can work around this by updating a property only when I actually need to update. The only risk is updating a property by accident, which would then be a very hard problem to find (someone new who doesn't know about this behaviour could easily make that mistake, for example).
The explanation requires understanding the difference between a transient and a persistent entity. A transient entity is a new entity and it is made persistent by calling Save(). An entity that has been retrieved using NHibernate is already persistent and any changes made to it will be automatically saved when the session is flushed. NHibernate's goal is to make the database consistent with the domain model when the session ends.
See chapter 9 in the documentation.
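A minimal sketch of this behaviour and the usual ways to opt out (the session factory setup and the Customer entity are assumed):

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var customer = session.Get<Customer>(1); // persistent: NHibernate tracks it
    customer.Name = "Changed";               // no Save/Update call anywhere

    tx.Commit(); // flush on commit: the change IS written to the database
}

// To prevent automatic updates:
//   session.Evict(customer);              // stop tracking a single entity
//   session.FlushMode = FlushMode.Manual; // flush only on explicit session.Flush()
//                                         // (FlushMode.Never in older versions)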
So we have this web service that uses a homemade data access framework, and I found that in its current state the web service cannot run more than one instance at a time, because the framework starts stepping on its own feet and complaining about connections being closed/already opened and errors like that.
So I implemented a SQL lock/mutex that queues all requests, and since then it's been pretty smooth.
I recently worked on another project that uses the ADO.NET Entity Framework (which I had never played with until then) and found out it pretty much does what this homemade framework does.
My question is: is the ADO.NET Entity Framework robust enough on its own that I would not need this SQL mutex implementation anymore?
Thanks.
If you follow the rule "Do not share ObjectContext (DbContext in code first) instances between threads", everything will be OK.
Entity Framework uses some static data to improve performance (the entity model cache), but most of its objects (entity connections, contexts, change trackers, etc.) are not thread-safe and shouldn't be shared between threads.
Yes, it is robust enough to do that, given you don't share DbContexts between threads, which your homebrew layer must be doing. Not a way I'd have gone.
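In other words, instead of serializing requests with a SQL mutex, give each operation its own short-lived context. A rough sketch, assuming EF 6 with a DbContext and hypothetical "MyDbContext"/"Orders" names:

public Order GetOrder(int id)
{
    // Each call gets its own context, so concurrent requests never share
    // a connection or a change tracker.
    using (var ctx = new MyDbContext())
    {
        return ctx.Orders.AsNoTracking().FirstOrDefault(o => o.Id == id);
    }
}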
I've been reading about self-tracking entities in .NET and how they can be generated from a *.edmx file. The thing I'm struggling to understand is what generating these entities gives you over the basic EF entities. Also, some people have mentioned self-tracking entities and Silverlight, but why would you use these rather than the client-side or shared classes generated by RIA Services?
What is the point of self-tracking entities and why would you use them?
Self-tracking entities (STEs) are an implementation of a change set (the previous .NET implementation of a change set is the DataSet). The difference between STEs and other entity types (POCO, EntityObject) is that common entity types can track changes only while attached to a living ObjectContext. Once a common entity is detached, it loses all change-tracking ability. This is exactly what STEs solve: an STE can track changes even after you detach it from the ObjectContext.
The common usage of STEs is in disconnected scenarios like .NET-to-.NET communication over web services. The first request to the web service creates and returns an STE (the entity is detached when serialized, and the ObjectContext lives only to serve a single call). The client makes changes to the STE and passes it back in another web service call. The service can process the changes because the STE's internal change tracking is available to it.
Handling this scenario without change tracking is possible, but it is much more complex, especially when you work with a whole object graph instead of a single entity: you must manually merge the changes received from the client into the current state in the database.
Be aware that STEs are not for interoperable solutions, because their functionality depends on sharing the STE code between server and client.
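To give a feel for the idea, here is a hand-rolled illustration (not the code the STE template actually generates): the entity carries its own change state, so tracking survives serialization across the service boundary.

using System.Runtime.Serialization;

// Hand-rolled illustration of the self-tracking idea, NOT the generated STE code.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }

    // The tracking state travels with the entity through WCF serialization,
    // so the server knows what happened without needing a live ObjectContext
    // on the client.
    [DataMember] public TrackedState State { get; set; }
}

public enum TrackedState { Unchanged, Added, Modified, Deleted }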
The main purpose is to aid in N-tier development. Since they're self-tracking, you can serialize them over, say, a WCF service, then deserialize them back, and they will still know which changes have been made and are pending for the database.
Self-tracking entities know how to do their own change tracking regardless of which tier those changes are made on. As an architecture, self-tracking entities falls between DTOs and DataSets and includes some of the benefits of each.
http://blogs.msdn.com/b/efdesign/archive/2009/03/24/self-tracking-entities-in-the-entity-framework.aspx
With LINQ, do you create a single DataContext per request, like NHibernate requires for sessions (for performance reasons; from what I understand, creating sessions in NHibernate is an expensive call)?
i.e. in my ASP.NET MVC application, for a given action I may hit the database 5-10 times in separate calls. Do I need to create a context and re-use it for the entire request?
DataContexts are intended to be used for a single set of actions interacting with your database. I know, that's vague. Their usage is situational. If you are doing related, or specifically sequential activities, then one DataContext is probably good for you. If you are doing unrelated or parallel activities, consider using a DataContext for each activity.
Consider a few guidelines (a short usage sketch follows the list):
Entities retrieved by one DataContext can only be used (read: updated, deleted, etc.) by that same DataContext. If you need to match up objects across separate DataContexts, you'll have to do something such as running a LINQ query to select objects with the same primary key.
LINQ to SQL uses optimistic concurrency.
Dispose of the DataContext when you are done with it (letting it go out of scope and be garbage collected is fine)
Do not use a static or shared DataContext.
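Here is the usage sketch promised above, showing one DataContext per logical set of related actions ("NorthwindDataContext" and the Orders table are hypothetical names):

using System.Linq;

// One DataContext per unit of work: create, use, submit, dispose.
using (var db = new NorthwindDataContext())
{
    var order = db.Orders.First(o => o.OrderID == 42);
    order.ShipCity = "Seattle";
    db.SubmitChanges(); // the optimistic concurrency check happens here
}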
When I did a small app using LINQ to SQL, I found the app was very sluggish when I did a create-use-dispose of a DataContext object each time I had to hit the database.
When I moved to sharing the DataContext across multiple requests... the app suddenly came back to life w.r.t. responsiveness.
Here's a question that I posted which is relevant