I'm working on an NHibernate project, and where I had trouble loading collections earlier (http://stackoverflow.com/questions/4213506/c-hibernate-criteria-loading-collection), I now have problems using the loaded data.
I'm using C# in combination with the NHibernate and Spring.Net frameworks, and I get a LazyInitializationException after I load, for instance, an 'ordercredit' and then access one of the objects attached to it.
I use this code for getting the OrderCredit:
OrderCredit oc = CreditService.getOrderCredit(ordercredit.Id);
The code I use for loading is done using a DAO implementation:
[Transaction(TransactionPropagation.Required, ReadOnly = true)]
public OrderCredit GetOrderCredit(long ordercreditid)
{
    var creditrules = Session.CreateCriteria(typeof(OrderCredit));
    creditrules.Add(Restrictions.Eq("Id", ordercreditid));
    return creditrules.List<OrderCredit>()[0];
}
When I run this on my local machine, everything works fine, and I actually intended to load a list of those 'ordercredits', but that went wrong as well, so I tried a simpler step first.
The objects within the 'OrderCredit' are defined as [OneToMany].
When I put this on the testserver, and try to access the 'OrderObject' object of the loaded OrderCredit, I get the error:
NHibernate.LazyInitializationException: Initializing[.OrderObject#5496522]-Could not initialize proxy - no Session.
Code that fails:
Log.Debug(oc.OrderObject.Name);
Code that works:
Log.Debug(oc.Id);
This happens for any object that's part of the OrderCredit, but I am able to access the property fields of the OrderCredit (for instance the OrderCredit.Id).
Also, when I access any of those objects BEFORE I return the data to the function that originally called the method, the information seems to get cached somehow, as I can still access it afterwards.
I've read a lot about this problem, like turning off lazy loading, but that did not work for me either (or I did it in the wrong place).
The thing that frustrates me most is the fact that it actually does work on my local machine, but not on the testserver. What could I be doing wrong?
Any help is highly appreciated.
1st update:
I am now using a GenericDao with its default method for loading one ordercredit. I use the following code to load one ordercredit by Id.
OrderCredit oc = GenericService.Load<OrderCredit>(Id);
The code inside the GenericDAO is the following, BUT it does not end or break the session, which means I am able to access the objects attached to the ordercredit:
[Transaction(TransactionPropagation.Supports, ReadOnly = true)]
public T Load<T>(long id) where T : ISaveableObject
{
    var obj = Session.Load<T>(id);
    return obj;
}
This is nearly the same code as I had in the function which I included earlier in this question.
I'm now really confused, because I don't know what it is that ends the session. I will work with this for now since it works, but I want to change it later so I can use my function to fetch the entire collection and access the items via a foreach loop.
Currently, I use my 'getOrderCredits' function to get the list of OrderCredit objects, and in the foreach I take the Id and use GenericDao.Load to get the actual item, whose attached objects I can then access. Of course this is not the way it should be done.
I'd be amazed if I get this solved.
This is a common problem people have when using NHibernate. It happens because:
You open a session
You load an entity from the database which references another entity
You close the session
You try to access a property on your referenced entity
NHibernate tries to lazily load the entity from the database using the same session that loaded the parent entity
The session is closed, so NHibernate throws a LazyInitializationException.
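In code, the failure mode looks something like this (a sketch; the explicit OpenSession stands in for whatever session handling Spring wires up for you):

OrderCredit oc;

using (var session = sessionFactory.OpenSession())
{
    oc = session.Get<OrderCredit>(ordercreditId);  // the entity itself loads fine
}   // the session is closed here

Log.Debug(oc.Id);                // OK: a plain property, already loaded
Log.Debug(oc.OrderObject.Name);  // throws LazyInitializationException: no Session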
You have a few options here:
Keep your session open longer, preferably using something like the unit of work pattern, which will give you tighter control.
Eagerly load your referenced entities when you query:
In your case, since Spring is managing your transaction for you, the second option is probably the quickest/easiest solution.
var creditrules = Session.CreateCriteria(typeof(OrderCredit));
creditrules.Add(Restrictions.Eq("Id", ordercreditid))
           .SetFetchMode("OrderObject", FetchMode.Eager);
This will load the OrderObject when you load the OrderCredit.
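Since you eventually want to load the whole list, the same fetch mode works there too; a minimal sketch, assuming the same DAO and transaction setup as your GetOrderCredit method:

[Transaction(TransactionPropagation.Required, ReadOnly = true)]
public IList<OrderCredit> GetOrderCredits()
{
    // Eagerly fetch the OrderObject for every OrderCredit,
    // so the objects remain usable after the session is gone.
    return Session.CreateCriteria(typeof(OrderCredit))
                  .SetFetchMode("OrderObject", FetchMode.Eager)
                  .List<OrderCredit>();
}

Note that if the eagerly fetched association is a collection rather than a single reference, the join can return duplicate root entities; NHibernate's DistinctRootEntity result transformer is the usual fix for that.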
Related
I'm getting a weird issue with nHibernate... I'm getting this exception:
Unable to resolve property: _Portal
when I try to commit an object graph. The strange thing is that when I search through the entire solution, I don't seem to have this particular property ANYWHERE within the project?!
Has anyone run into this particular case, and if so, what did you do to resolve it?
I ran into the same issue after upgrading NHibernate to 3.3 (from 3.1), along with the associated libraries (including FluentNHibernate). I have a parent object with a child collection, and when modifying the child collection, it would throw the same exception you received (with the nonexistent "_Namespace" property name, where "Namespace" was the first section of my actual namespace).
In our case, switching to SaveOrUpdate() is not an option, as we actually have a version of this object loaded in session as well, so we need Merge().
I don't know what other similarities there might be. For us it's a parent object with a child collection, using FluentNhibernate. Mapping on the parent object is Cascade.AllDeleteOrphan() for the child, and for the child to the parent, Cascade.None().
Unfortunately I can't find any other reports of this bug, so the solution for us was to just revert back to nHibernate 3.1 (and the associated binaries, like FluentNhibernate and Iesi.Collections). That's the only change, and then it works fine again.
Update on bug logged in JIRA [3234].
There is a bug logged for this in JIRA. The issue has not received any priority yet. Perhaps if you are experiencing this issue you can create an account and vote for the bug to be fixed.
https://nhibernate.jira.com/browse/NH-3234
Update on workaround posted for bug JIRA [3234].
As per Ondrej's comment on the bug, overriding the default merge listener in the session configuration with the code below works around the issue for now. With a workaround posted, I am sure it will be fixed officially soon.
public class UniDirectionalMergeFixListener : DefaultMergeEventListener
{
    protected override IDictionary GetMergeMap(object anything)
    {
        // Swap keys and values from the event cache into a new identity map.
        var cache = (EventCache)anything;
        var result = IdentityMap.Instantiate(cache.Count);

        foreach (DictionaryEntry entry in cache)
            result[entry.Value] = entry.Key;

        return result;
    }
}
So I solved my issue, but I'm not sure why this was the resolution.
In my project, I've abstracted the use of NHibernate into its own project (*.Objects.nHibernate is the namespace). I did this because the client I work with doesn't typically like using NHibernate, and I'm trying to get them on board with using it.
What was happening is that this project has a few data models that are append only in the system... e.g., we never do an update. So, my "Repository" has to take that into account.
In my Commit() function within the repository, I serialize the object graph and then deserialize it to make a copy of the object for saving. I was calling "_Session.Merge(...)" when I needed to call "_Session.SaveOrUpdate(...)" to get things to commit to the database properly... I'm unsure why that made the difference, but that was the answer to the past two days.
Thx. for your help Rippo & Nickolay!
The workaround for this issue is to derive from DefaultMergeEventListener and override the following method like so:
protected override IDictionary GetMergeMap(object anything)
{
    var cache = (EventCache)anything;
    var result = IdentityMap.Instantiate(cache.Count);

    foreach (DictionaryEntry entry in cache)
    {
        result[entry.Value] = entry.Key;
    }

    return result;
}
Then simply use this custom event listener when you construct your SessionFactory. I have posted additional details to the related NHibernate bug report: NH-3234
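For reference, wiring the listener into the SessionFactory looks roughly like this; a sketch assuming a plain NHibernate Configuration object (Fluent or Spring setups expose the same listener collection):

var configuration = new NHibernate.Cfg.Configuration();
// ... mappings, connection settings, etc. ...

// Replace the default merge listener with the workaround listener.
configuration.EventListeners.MergeEventListeners =
    new IMergeEventListener[] { new UniDirectionalMergeFixListener() };

var sessionFactory = configuration.BuildSessionFactory();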
A few things to check:
Do you have a backing field called _Portal on your domain object?
Also, does the word portal exist anywhere within your solution?
Do a clean solution and see what DLLs are left in any of your bin folders.
Is your NHibernate configuration being serialized after it has been built? If so, check you are using the latest version.
HTH
One more idea: NHibernate allows you to specify in the mapping how to access your backing field or property. For example, <property access="nosetter.pascalcase-underscore" name="Timestamp" /> will make NHibernate set the value through the field _Timestamp. Do you have any such access specifiers in your mapping?
I have a web service that is quite heavy on database access. It works fine in test, but as soon as I put it in production and ramp up the load it starts churning out errors that are raised when something calls a method in the DataContext. The error is normally one of these:
Object reference not set to an instance of an object
Cannot access a disposed object. Object name: 'DataContext accessed after Dispose.'.
but not always.
Any single web service requests can result as many as 10 or 15 database queries, and 1 or 2 updates.
I've designed my application with a data access layer, which is a bunch of objects that represent the tables in my database and hold all the business logic. It is a separate project from my web service, as it's shared with a web GUI.
The data access objects derive from a base class which has a GetDataContext() method to initiate an instance of the data context whenever it's needed.
All throughout my data access objects I've written this:
using (db = GetDataContext())
{
    // do some stuff
}
which happily creates/uses/disposes my DataContext (created by sqlmetal.exe) object for each and every database interaction.
After many hours of head scratching, I think I've decided that the cause of my errors is that under load the DataContext object is being created and disposed far too often, and I need to change things to share the same DataContext for the duration of the web service request.
I found this article on the internet which has a DataContextFactory that seems to do exactly what I need.
However, now that I've implemented this, and the DataContext is saved as an item in the HttpContext, I get...
Cannot access a disposed object.
Object name: 'DataContext accessed after Dispose.'
...whenever my datacontext is used more than once. This is because my using (...) {} code is disposing my datacontext after its first use.
So, my question is... before I go through my entire data access layer and remove loads of usings, what is the correct way to do this? I don't want to cause a memory leak by taking out the usings, but at the same time I want to share my datacontext across different data access objects.
Should I just remove the usings and manually call the Dispose method just before I return from the web service request? If so, how do I make sure I capture everything, bearing in mind I have several try-catch blocks that could get messy?
Is there another better way to do this? Should I just forget about disposing and hope everything is implicitly cleaned up?
UPDATE
The problem doesn't appear to be a performance issue... requests are handled very quickly, no more than about 200ms. In fact I have load tested it by generating lots of fake requests with no problems.
As far as I can see, it is load related for one of two reasons:
A high number of requests causes concurrent requests to affect each other
The problem happens more frequently simply because there are a lot of requests.
When the problem does occur, the application pool goes into a bad state, and requires a recycle to get it working again.
Although I would prefer the unit-of-work approach using using, sometimes it doesn't fit into your design. Ideally you'd want to ensure that you are freeing up your SqlConnection when you're done with it, so that another request has a chance of grabbing that connection from the pool. If that is not possible, what you need is some assurance that the context is disposed of after each request. This could be done in a couple of ways:
If you're using WebForms, you can tie the disposal of the DataContext at the end of the page lifecycle. Make a check to the HttpContext.Items collection to determine if the last page had a data context, and if so, dispose of it.
Create a dedicated IHttpModule which attaches an event to the end of the request, where you do the same as above (a sketch follows below).
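A minimal sketch of the IHttpModule approach; the "DB" key and the cast are illustrative and should match whatever your factory stores in HttpContext.Items:

public class DataContextDisposalModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.EndRequest += (sender, e) =>
        {
            // Dispose whatever DataContext this request stashed, if any.
            var db = HttpContext.Current.Items["DB"] as IDisposable;
            if (db != null)
            {
                db.Dispose();
                HttpContext.Current.Items.Remove("DB");
            }
        };
    }

    public void Dispose() { }
}

The module then needs to be registered in web.config so it runs for every request.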
The problem with both of the above solutions, is that if you are under heavy load, you'll find that a lot of requests hang about waiting for a connection to be made available, likely timing out. You'll have to weigh up the risks.
All in all, the unit-of-work approach would still be favoured, as you are releasing the resource as soon as it is no longer required.
I managed to fix this myself...
I had a base class that had a method that would create the DataContext instance, like this:
public abstract class MyBase
{
    protected static DataContext db = null;

    protected static DataContext GetDataContext()
    {
        return new DataContext("My Connection String");
    }

    // rest of class
}
And then, in the classes that inherited MyBase where I wanted to do my queries, I had statements like this:
using (db = GetDataContext()) { ... }
The thing is, I wanted to access the database from both static methods and non-static methods, so in my base class I'd declared the db variable as static... Big mistake!
If the DataContext variable is declared as static, then under heavy load, when lots of things are happening at the same time, the DataContext is shared among requests. If two requests touch the DataContext at exactly the same time, it corrupts that DataContext instance and its database connection for all subsequent requests, until the application pool is recycled and the database connection is refreshed.
So, the simple fix is to change this:
protected static DataContext db = null;
to this:
protected DataContext db = null;
...which will break all of the using statements in the static methods. But this can easily be fixed by declaring the DataContext variable in the using instead, like this:
using (DataContext db = GetDataContext()) { ... }
This happens if you have, for example, an object that references another object (e.g. a join between two tables) and you try to access the referenced object after the context has been disposed of. Something like this:
IEnumerable<Father> fathers;

using (var db = GetDataContext())
{
    // Assume Father has a field called Sons of type IEnumerable<Son>
    fathers = db.Fathers.ToList();
}

foreach (var father in fathers)
{
    // This will get the exception you got
    Console.WriteLine(father.Sons.FirstOrDefault());
}
This can be avoided by forcing it to load all the referenced objects like this:
IEnumerable<Father> fathers;

using (var db = GetDataContext())
{
    var options = new System.Data.Linq.DataLoadOptions();
    options.LoadWith<Father>(f => f.Sons);
    db.LoadOptions = options;
    fathers = db.Fathers.ToList();
}

foreach (var father in fathers)
{
    // This will no longer throw
    Console.WriteLine(father.Sons.FirstOrDefault());
}
I am using Nhibernate (I am a complete noob), and what I want to be able to do is copy an entity that is loaded from the database and save it with a new Id... has anyone run into this situation? Any help would be very appreciated.
Just do new MyClass() and copy everything except the Id. You can use reflection for that.
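A minimal sketch of that idea; the property name "Id" and the parameterless-constructor requirement are assumptions to adjust to your own mappings:

public static T CloneWithoutId<T>(T source) where T : class, new()
{
    var clone = new T();
    var props = typeof(T).GetProperties(
        System.Reflection.BindingFlags.Public |
        System.Reflection.BindingFlags.Instance);

    foreach (var prop in props)
    {
        // Copy every readable/writable property except the identifier.
        if (prop.CanRead && prop.CanWrite && prop.Name != "Id")
            prop.SetValue(clone, prop.GetValue(source, null), null);
    }

    return clone;
}

Note this is a shallow copy: child collections are copied by reference, which is exactly the complication the next answer runs into.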
I need to do exactly this for a very complex set of objects and what I have found so far is:
NHibernate does not exactly support this.
If you try to simply replace the Id of an object you got from a session, you will get an NHibernate error: identifier of an instance of was altered from <9ae3868d-17bf-4314-ba0c-4eb3b44b1a2e> to <2b2b67c6-a421-48c4-836c-4c27f6481718>
If the session no longer knows about the objects it retrieved, i.e. if you evict them before saving and flushing, just changing the Ids will work. So you could write code like this:
public void CloneStudent(Guid studentId)
{
    // Get existing student
    Student student = _session.Get<Student>(studentId);

    // Copy by reference
    Student newStudent = student;

    // Reset Id to do quick and dirty clone
    newStudent.Id = Guid.NewGuid();
    newStudent.Sticker = "D";

    // Must evict existing object or NHibernate will throw object modified error
    _session.Evict(student);

    // Save new object
    _session.Save(newStudent);
    _session.Flush();
}
The problem with this is that if your object graph has any depth, you have to be sure to evict the entire set, and if you still need the originals in the session, you have to retrieve them again. This is a logistical headache and yields code with very obscure and convoluted intentions.
I do not recommend.
What is more commonly done is to serialize the graph to a binary stream and reconstitute that stream into a new set of objects. Fine, but it only works if your objects are all serializable.
That is not the case for me, so I wrote manual code to make deep copies of the object graph using copy constructors. This is complex and can also lead to maintenance issues, but if the objects cannot be serialized there are few better alternatives.
Sorry, deep copying objects remains a complicated task if serialization is not an option.
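For completeness, the serialization approach mentioned above looks roughly like this; a sketch assuming every type in the graph is marked [Serializable] (lazy NHibernate proxies in the graph will complicate it):

public static T DeepClone<T>(T source)
{
    var formatter =
        new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();

    using (var stream = new System.IO.MemoryStream())
    {
        // Round-trip the whole graph to produce a detached deep copy,
        // whose Ids can then be reset before saving it as a new graph.
        formatter.Serialize(stream, source);
        stream.Position = 0;
        return (T)formatter.Deserialize(stream);
    }
}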
Recently I've been thinking about the performance difference between class field members and method variables. What exactly I mean is shown in the example below.
Let's say we have a DataContext object for Linq2SQL:
class DataLayer
{
    ProductDataContext context = new ProductDataContext();

    public IQueryable<Product> GetData()
    {
        return context.Products.Where(t => t.ProductId == 2);
    }
}
In the example above, the context will be stored on the heap, and the GetData method's local variables will be removed from the stack after the method is executed.
So let's examine the following example to make a distinction:
class DataLayer
{
    public IQueryable<Product> GetData()
    {
        ProductDataContext context = new ProductDataContext();
        return context.Products.Where(t => t.ProductId == 2);
    }
}
(*1) So, the first thing we know is that if we define the ProductDataContext instance as a field, we can reach it everywhere in the class, which means we don't have to create the same object instance every time.
But let's say we are talking about ASP.NET: once the user presses the submit button, the post data is sent to the server, the events are executed, and the posted data is stored in a database via the method above, so it is probable that the same user sends different data one request after another. If I understand correctly, after the page is executed, the finalizers come into play and clear things from memory (from the heap), which means we lose our instance variables as well, and after another post the DataContext has to be created once again for the new page cycle.
So it seems the only benefit of declaring it at class level is just point (*1) above.
Or is there something other?
Thanks in advance...
(If I said anything incorrect, please correct me.)
When it comes to the performance difference between creating an object per method call or per class instance, I wouldn't worry too much about it. However, what you seem to miss here are some important principles around the DataContext class and the unit of work pattern in general.
The DataContext class operates as a single unit of work. Thus, you create a DataContext, you create objects, update and delete objects, you submit all changes and you dispose the DataContext after that. You may create multiple DataContext classes per request, one per (business) transaction. But in ASP.NET you should never create a DataContext that survives a web request. All the DataContexts that are created during a request should be disposed when or before that request is over. There are two reasons for this.
First of all, the DataContext has an internal cache of all objects that it has fetched from the database. Using a DataContext for a long period of time will make its cache grow indefinitely and can cause memory problems when you've got a big database. The DataContext will also favor returning an object from cache when it can, making your objects go stale quickly. Any update and delete operation made on another DataContext or directly to the database can go unnoticed because of this staleness.
The second reason for not caching DataContexts is that they are not thread-safe. It's best to see a DataContext as a unit of work, or as a (business) transaction. You create a bunch of new objects, add them to the DataContext, change some others, remove some objects, and when you're done, you call SubmitChanges. If another request calls SubmitChanges on that same instance during that operation, you lose the idea of the transaction. When you allow code to do this, in the most fortunate situation your new objects will be persisted and your transaction is split up into two separate transactions. At worst, you leave your DataContext, or the objects it persists, in an invalid state, which could mean other requests fail or invalid data enters your database. And this is not an unlikely scenario; I've seen strange things happen on projects where developers created a single (static) DataContext per web site.
So with this in mind, let's get back to your question. While defining a DataContext as an instance field is not a problem in itself, it is important to know how you are using the DataLayer class. When you create one DataLayer per request or one per method call, you'll probably be safe, but in that case you shouldn't store that DataLayer in a static field. If you want to do that, you should create a DataContext per method call.
It is important to know what the design of the DataLayer class is. In your code you only show us a query method, no CUD methods. Is every method meant to be a single transaction, or do you want to call multiple methods and call a SaveChanges on the DataLayer afterwards? If you want the latter option, you need to store the DataContext as an instance field, and in that case you should implement IDisposable on the DataLayer (see the sketch below). When every method is its own transaction, you can create a DataContext per method and wrap it in a using statement. Note, however, that disposing the DataContext can cause problems when you return objects with lazy-loading properties from a method; those properties can no longer be loaded once the DataContext is disposed. Here is more interesting information about this.
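A minimal sketch of that latter option, with the DataContext held as an instance field and the DataLayer implementing IDisposable (the names follow the question's example):

public class DataLayer : IDisposable
{
    private readonly ProductDataContext context = new ProductDataContext();

    public IQueryable<Product> GetData()
    {
        return context.Products.Where(t => t.ProductId == 2);
    }

    // Call once, after all work in the (business) transaction is done.
    public void SaveChanges()
    {
        context.SubmitChanges();
    }

    public void Dispose()
    {
        context.Dispose();
    }
}

Usage would then be one DataLayer per transaction, wrapped in a using statement so the DataContext never outlives it.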
As you see, I haven’t even talked about which of your two options would be better for performance, because performance is of no importance when the solution gives inconsistent and incorrect results.
I'm sorry for my long answer :-)
You don't ever want to store a DataContext class on the class level. If you do, then you will have to implement the IDisposable interface on your class and call the Dispose method when you know you are done with it.
It's better to just create a new DataContext in your method and use a using statement to automatically dispose of it when you are done.
Even though the implementation of IDisposable on DataContext does nothing, that is an implementation detail, whereas exposing an IDisposable interface is a contract which you should always abide by.
This becomes especially handy if you upgrade to LINQ to Entities and use the ObjectContext class, where you must call Dispose on the instance when you are done with it; otherwise, resources will leak until the next garbage collection.
So it seems the only benefit of declaring it at class level is just point (*1) above.
Yes, declaring a class level variable is to allow the entire class to access the same variable. It should not be used to try and deliberately prevent a Garbage Collection from occurring. The access modifier on properties, methods etc. is used to determine what objects external or internal to your class can access/modify/monkey with that piece of code.
In ASP.NET, once the request is sent to the browser, the objects created for that page request will get GCed at some point in the future, regardless of whether or not the variable is public. If you want information to persist between requests, you either need to create a singleton instance of the object, or serialize the object to either session or application state.
See this for example - "Linq to SQL DataContext Lifetime Management": http://www.west-wind.com/weblog/posts/246222.aspx This approach makes life simpler.
In a web application that I have run across, I found the following code to deal with the DataContext when dealing with LinqToSQL
public partial class DbDataContext
{
    public static DbDataContext DB
    {
        get
        {
            if (HttpContext.Current.Items["DB"] == null)
                HttpContext.Current.Items["DB"] = new DbDataContext();

            return (DbDataContext)HttpContext.Current.Items["DB"];
        }
    }
}
Then referencing it later doing this:
DbDataContext.DB.Accounts.Single(a => a.accountId == accountId).guid = newGuid;
DbDataContext.DB.SubmitChanges();
I have been looking into best practices when dealing with LinqToSQL.
I am unsure about the approach this one has taken, given that DataContext is not thread-safe, and about keeping what looks like a static copy of it around.
Is this a good approach to take in a web application?
@Longhorn213 - Based on what you said, and the more I have read into HttpContext because of it, I think you are right. But in the application I have inherited this is confusing, because at the beginning of each method they requery the database to get the information, then modify that instance of the DataContext and submit changes on it.
From this, I think this method should be discouraged, because it gives the false impression that the DataContext is static and persists between requests. If a future developer skips requerying the data at the beginning of a method because they think it is already there, they could run into problems and not understand why.
So I guess my question is, should this method be discouraged in future development?
This is not a static copy. Note that the property retrieves it from Context.Items, which is per-request. This is a per-request copy of the DataContext, accessed through a static property.
On the other hand, this property is assuming only a single thread per request, which may not be true forever.
A DataContext is cheap to make and you won't gain much by caching it in this way.
I have done many Linq to Sql web apps and I am not sure if what you have would work.
The datacontext is supposed to track the changes you make to your objects and it will not do that in this instance.
So when you go to submit changes, it will not know that any of your objects were updated, and thus will not update the database.
You have to do some extra work with the DataContext in a disconnected environment like a web application. It is hardest with an update, but not really that bad. I would not cache it; I would just recreate it.
Also the context itself is not transactional so it is theoretically possible an update could occur on another request and your update could fail.
I prefer to create a Page base class (inheriting from System.Web.UI.Page) and expose a DataContext property. This ensures that there is one instance of the DataContext per page request.
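A minimal sketch of that base-page approach; the lazy property and the OnUnload disposal are my assumptions about the wiring, not the poster's exact code:

public class BasePage : System.Web.UI.Page
{
    private DataContext dataContext;

    // One DataContext per page request, created lazily on first use.
    protected DataContext DataContext
    {
        get
        {
            if (dataContext == null)
                dataContext = new DataContext("My Connection String");

            return dataContext;
        }
    }

    protected override void OnUnload(System.EventArgs e)
    {
        // Dispose at the end of the page lifecycle.
        if (dataContext != null)
            dataContext.Dispose();

        base.OnUnload(e);
    }
}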
This has worked well for me, it's a good balance IMHO. You can just call DataContext.SubmitChanges() at the end of the page and be assured that everything is updated. You also ensure that all the changes are for one user at a time.
Doing this via a static will cause pain: I fear the DataContext will lose track of changes, since it would be trying to track changes for many users concurrently. I don't think it was designed for that.