I got an OutOfMemoryException earlier and couldn't figure out what it was for. It made no sense at all. I dug around in my code and suddenly remembered that somewhere I had forgotten to check for null, and in this particular case the value was (and should be) exactly that. That shouldn't cause an OutOfMemoryException in my opinion, but I fixed it anyway of course. And when I did, the exception didn't appear anymore!
So I removed the check again and studied the exception I got some more. It turns out it had an InnerException of type NullReferenceException and a stack trace which, of course, made a lot more sense.
But why did I get an OutOfMemoryException? This has never happened to me before... it makes no sense to me...
Would love to give some more context, but I can't really say much without uploading the whole project, which I can't do (and which you wouldn't want to read through anyway :p). But the specific place it happened looks like this:
{
    foreach (var exportParameter in exportParameters)
    {
        // Copy to local
        var ep = exportParameter;
        // Load stored values from db
        ...
    }

    int i = 1;
    exportParameters
        .OrderBy(ø => ø.Sequence)
        .ForEach(ø => { if (!ø.Locked) ø.Sequence = i++; });
}
The fix was to put an if(exportParameters != null) before the code block. exportParameters is a List<ExportParameter>, except in the failing case in which it was null.
You might be facing the problem that Constrained Execution Regions are designed to prevent - that is, the JITting of some code that your catch clause relies on is causing the out of memory condition.
(In response to svish's comment, this is the first link when googling the phrase: http://msdn.microsoft.com/en-us/library/ms228973.aspx)
Aside from the obvious reason for getting an OOMException, you can also get it if you still have memory available, just not a big enough contiguous chunk for what is being requested. If you're getting it reliably and relatively near startup, you're probably accidentally requesting more memory than you intend to (i.e. requesting a very large array). Can you post a bit of your code or at least describe your allocation pattern?
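For instance (a purely hypothetical sketch, not taken from the question's code), an allocation whose size comes from an unchecked or corrupted value can ask for far more memory than intended:

// Hypothetical sketch: 'count' is read from untrusted input (a file header, a
// network packet, ...) and happens to be garbage - say around two billion.
int count = 1900000000;

// Even with plenty of total memory free, the runtime needs one contiguous chunk
// of this size, so in a 32-bit process this line throws OutOfMemoryException.
byte[] buffer = new byte[count];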
So, I have a piece of code using WeakReferences. I know of the common .IsAlive race condition, so I didn't use that. I basically have something like this:
WeakReference lastString = new WeakReference(null);

public override string ToString()
{
    if (lastString != null)
    {
        var s = lastString.Target as string; // error here
        if (s != null)
        {
            return s;
        }
    }
    var str = base.ToString();
    lastString = new WeakReference(str);
    return str;
}
Somehow, I'm getting a null reference exception at the marked line. By debugging it, I can confirm that lastString is indeed null, despite being wrapped in a null check and lastString never actually being set to null.
This only happens in a complex flow as well, which makes me think garbage collection is somehow taking my actual WeakReference object, and not just its target.
Can someone enlighten me as to how this is happening and what the best course of action is?
EDIT:
I can't determine the cause of this at all. I ended up wrapping the offending code in a try-catch just to fix it for now, but I'm very interested in the root cause. I've been trying to reproduce this in a simple test case, but it's proven very difficult to do. Also, this only appears to happen when running under a unit test runner. If I take the code and trim it down to the minimum, it will continue to crash when run using TestDriven and Gallio, but will not fail when put into a console application.
This ended up being a very hard to spot logic bug that was in plain sight.
The offending if statement really was more like this:
if(lastString != null && limiter == null || limiter == lastLimiter)
Because && binds more tightly than ||, the true grouping of this is really:
if((lastString != null && limiter == null) || limiter == lastLimiter)
And as Murphy's law would dictate, somehow, in this one unrelated test case, lastLimiter and lastString got set to null by a method used nowhere but in this one single test case.
So yeah, no bug in the CLR, just my own logic bug that was very hard to spot.
I recently had a coding bug where, under certain conditions, a variable wasn't being initialized and I was getting a NullReferenceException. This took a while to debug, as I had to find the bits of data that would recreate the error, since the exception doesn't give the variable name.
Obviously I could check every variable before use and throw an informative exception, but is there a better (read: less coding) way of doing this? Another thought I had was shipping the PDB files so that the error information would contain the line of code that caused the error. How do other people avoid / handle this problem?
Thanks
Firstly: don't do too much in a single statement. If you have huge numbers of dereferencing operations in one line, it's going to be much harder to find the culprit. The Law of Demeter helps with this too - if you've got something like order.SalesClerk.Manager.Address.Street.Length then you've got a lot of options to wade through when you get an exception. (I'm not dogmatic about the Law of Demeter, but everything in moderation...)
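For example, with the hypothetical order object above, splitting the chain into intermediate locals means the line number in the stack trace tells you exactly which dereference was the null one:

// One statement, many possible nulls - the exception doesn't say which one failed:
int length = order.SalesClerk.Manager.Address.Street.Length;

// Split up, the failing line identifies the null reference:
var clerk = order.SalesClerk;
var manager = clerk.Manager;
var address = manager.Address;
var street = address.Street;
int streetLength = street.Length;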
Secondly: prefer casting over using as, unless it's valid for the object to be a different type, which normally involves a null check immediately afterwards. So here:
// What if foo is actually a Control, but we expect it to be String?
string text = foo as string;
// Several lines later
int length = text.Length; // Bang!
Here we'd get a NullReferenceException and eventually trace it back to text being null - but then you wouldn't know whether that's because foo was null, or because it was an unexpected type. If it should really, really be a string, then cast instead:
string text = (string) foo;
Now you'll be able to tell the difference between the two scenarios.
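To make the difference concrete (a contrived example, not from the question):

object foo = new object();      // not actually a string

string viaAs = foo as string;   // silently null...
// int n = viaAs.Length;        // ...NullReferenceException much later, cause obscured

string viaCast = (string) foo;  // InvalidCastException right here, naming the real problem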
Thirdly: as others have said, validate your data - typically arguments to public and potentially internal APIs. I do this in enough places in Noda Time that I've got a utility class to help me declutter the check. So for example (from Period):
internal LocalInstant AddTo(LocalInstant localInstant,
                            CalendarSystem calendar, int scalar)
{
    Preconditions.CheckNotNull(calendar, "calendar");
    ...
}
You should document what can and can't be null, too.
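For illustration, here's a minimal sketch of what such a Preconditions helper might look like (the actual Noda Time implementation may differ):

using System;

internal static class Preconditions
{
    // Throws ArgumentNullException if 'argument' is null; otherwise returns it
    // unchanged so the check can be used inline.
    internal static T CheckNotNull<T>(T argument, string paramName) where T : class
    {
        if (argument == null)
        {
            throw new ArgumentNullException(paramName);
        }
        return argument;
    }
}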
In a lot of cases it's near impossible to plan and account for every type of exception that might happen at any given point in the execution flow of your application. Defensive coding is effective only to a certain point. The trick is to have a solid diagnostics stack incorporated into your application that can give you meaningful information about unhandled errors and crashes. Having a good top-level (last ditch) handler at the app-domain level will help a lot with that.
Yes, shipping the PDBs (even with a release build) is a good way to obtain a complete stack trace that can pinpoint the exact location and causes of errors. But whatever diagnostics approach you pick, it needs to be baked into the design of the application to begin with (ideally). Retrofitting an existing app can be tedious and time/money-intensive.
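As a rough sketch (the logging sink and wiring here are placeholders; your real diagnostics stack decides what to capture and where), a last-ditch app-domain-level handler can be registered like this:

using System;

static class Program
{
    static void Main()
    {
        // Last-chance hook for exceptions that nothing else caught.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            var ex = e.ExceptionObject as Exception;
            // Replace with your real diagnostics/logging stack.
            Console.Error.WriteLine("Unhandled exception: " + ex);
        };

        RunApplication(); // hypothetical entry point for the rest of the app
    }

    static void RunApplication() { /* ... */ }
}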
Sorry to say that I will always make a check to verify that any object I am using in a particular method is not null.
It's as simple as
if (this.SubObject == null)
{
    throw new Exception("Could not perform METHOD - SubObject is null.");
}
else
{
    ...
}
Otherwise I can't think of any way to be thorough. Wouldn't make much sense to me not to make these checks anyway; I feel it's just good practice.
First of all you should always validate your inputs. If null is not allowed, throw an ArgumentNullException.
Now, I know how that can be painful, so you could look into assembly rewriting tools that do it for you. The idea is that you'd have a kind of attribute that would mark those arguments that can't be null:
public void Method([NotNull] string name) { ...
And the rewriter would fill in the blanks...
Or a simple extension method could make it easier
name.CheckNotNull();
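A minimal sketch of such an extension method (the names here are illustrative, not from an existing library):

using System;

public static class NullGuardExtensions
{
    // Throws ArgumentNullException if 'value' is null; otherwise returns it,
    // so the check can be chained or used inline.
    public static T CheckNotNull<T>(this T value, string paramName = "value") where T : class
    {
        if (value == null)
        {
            throw new ArgumentNullException(paramName);
        }
        return value;
    }
}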
If you are just looking for a more compact way to code against having null references, don't overlook the null-coalescing operator ?? (documented on MSDN).
Obviously, it depends what you are doing but it can be used to avoid extra if statements.
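For example (a trivial sketch with a hypothetical userName variable):

string userName = null;

// Without ??:
string displayName;
if (userName != null)
{
    displayName = userName;
}
else
{
    displayName = "(anonymous)";
}

// With ??, the fallback fits in one expression:
string displayName2 = userName ?? "(anonymous)";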
I get the feeling that it is, but I wanted to confirm: is it bad form to do something like this?
try
{
    SqlUpload(table);
}
catch (PrimaryKeyException pke)
{
    DeleteDuplicatesInTable(table);
    SqlUpload(table);
}
catch (Exception ex)
{
    Console.Write(ex);
}
Is it better to do this to potentially save on efficiency in case the table doesn't have duplicates, or is it better to run the delete duplicates bit anyway? (I'm assuming conditions where the table will upload as long as there are no duplicates within the table itself). Also, given how try-catch statements impact performance, would it even be faster to do it this way at all?
I apologize for the crude nature of this example, but it was just to illustrate a point.
Exceptions can be used correctly in transaction management, but that is not the case in this example. On first look, it appeared that this was similar to what Linq2Sql's DataContext class does in its call to SubmitChanges(). However, that analogy was incorrect. (Please see Chris Marisic's comment on my post for an accurate criticism of the comparison.)
On exceptions
In general, if there is some issue that is likely to be encountered, you should check for it first. Exceptions should be used when a response is truly "exceptional" (meaning it is unexpected in the context of proper usage). If the proper usage of a function within a completely valid context throws an exception, then you are probably using exceptions incorrectly.
Excerpt from DataContext.SubmitChanges
This code shows an example of the correct usage of exceptions in transaction management.
Note: just because Microsoft does it doesn't automatically mean it's right. However, their code does have a pretty good track record.
public void SubmitChanges(ConflictMode failureMode)
{
    bool flag = false; // tracks whether this call opened the connection
    DbTransaction transaction = null;
    try
    {
        try
        {
            if (this.provider.Connection.State == ConnectionState.Open)
            {
                this.provider.ClearConnection();
            }
            if (this.provider.Connection.State == ConnectionState.Closed)
            {
                this.provider.Connection.Open();
                flag = true;
            }
            transaction = this.provider.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
            this.provider.Transaction = transaction;
            new ChangeProcessor(this.services, this).SubmitChanges(failureMode);
            this.AcceptChanges();
            this.provider.ClearConnection();
            transaction.Commit();
        }
        catch
        {
            if (transaction != null)
            {
                try
                {
                    transaction.Rollback();
                }
                catch
                {
                }
            }
            throw;
        }
        return;
    }
    finally
    {
        this.provider.Transaction = null;
        if (flag)
        {
            this.provider.Connection.Close();
        }
    }
}
Yes, this type of code is considered bad form in .NET.
You would be better off writing code similar to

if (HasPrimaryKeyViolations(table))
{
    DeletePrimaryKeyViolations(table);
}

SqlUpload(table);
Reading this code, I would assume that primary key violations are an exceptional case and not expected - if they are expected, you should remove the duplicates beforehand and not rely on exception handling for an expected case.
In all common languages/interpreters/compilers, exception handling is implemented to have a minimal performance impact when an exception isn't raised -- under the hood, adding an exception handler is usually just pushing a single value onto a stack or something similar. Just adding a try block won't usually have a performance impact.
On the other hand, when an exception is actually raised, things can get very slow very fast. It's the trade-off for being able to add the try block without worrying about the cost, and it's usually seen as acceptable, because you only take the performance hit if something unexpected has already gone wrong somewhere else.
So, in theory, if there is a condition that you expect to happen, use an if instead. It's semantically better because it expresses to the reader that the bad condition is probably going to happen from time to time (e.g., the user types in some invalid input), while the try expresses something that you hope never happens (the data source is corrupt). As above, it's also going to be easier on performance.
Of course, rules are defined by their exceptions (no pun intended). In practice, there are two situations where this becomes the wrong answer:
First, if you are performing a complex operation like parsing a file, and it's an all-or-nothing proposition -- if one field in the file is corrupt, you want to bail on the whole operation. Exceptions allow you to jump out of the whole complex process up to an exception handler encapsulating the entire parse. Sure, you could litter the parsing code with checks and return values and checks on the return values -- but it's going to be a lot cleaner just to throw the exception and let it rise up to the top of the operation. Even if you expect that the input is going to be bad sometimes, if there isn't a reasonable way to handle the error exactly at the point where it occurs, use exceptions to let the error rise up to a more appropriate place to handle it. That's really what exceptions were for in the first place -- getting rid of all that error-handling code down in the details and moving it to one consolidated, reasonable place.
Second, a library might not let you make the choice. For example, int.TryParse is a good alternative to int.Parse if the input hasn't already been vetted. On the other hand, a library might not have a non-exception-throwing option. In that case, don't brew up your own code to check without exceptions -- though it might be bad form to use exception handling to check for an expected condition, it's worse form to duplicate the functionality of the library just to avoid an exception. Just use the try/catch, and maybe add a snide comment about how you didn't WANT to do it, but the library authors MADE you :).
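The int.Parse / int.TryParse pair mentioned above is the classic illustration:

using System;

class ParseExample
{
    static void Main()
    {
        string input = "not a number"; // e.g. unvetted user input

        // Exception-based: int.Parse(input) would throw FormatException for
        // input that we fully expect to be bad from time to time.

        // Check-based: the library offers a non-throwing alternative, so prefer it here.
        int value;
        if (int.TryParse(input, out value))
        {
            Console.WriteLine("Parsed: " + value);
        }
        else
        {
            Console.WriteLine("Please enter a valid number.");
        }
    }
}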
For your particular example, probably stick with the exception. While exception handling isn't considered 'fast', it's still faster than a round trip to the database, and there isn't going to be a reasonable way to check for that condition without sending the command anyway. Plus, hitting a database is interfacing with an external system -- that in itself is a pretty good reason to expect the unexpected. Exception handling almost always makes sense when you are leaving your particular controlled environment.
Or, more specifically to your example, you may consider using a stored procedure with a MERGE statement, to use the source data in table to update or insert as appropriate; the update will be a little friendlier on all fronts than doing a delete-then-insert for existing keys.
Try-catch blocks are expensive on performance, so don't use them as control structures. Instead, use triggers at the database level.
One problem is that any exception thrown by the call to SqlUpload() inside the catch block will cause the application to crash, because there is no further exception handling around it.
You'll probably get a few different opinions on this, but mine is that try...catch should be used for things that shouldn't normally happen (although sometimes it's unavoidable). So by that rule, you should ask if the duplicates are in the table by normal execution of the program or if they should not exist given allowable execution of the program.
Just to clarify, I'm saying "normal" usage of the program, not "correct" usage (e.g. when I test it the duplicates don't appear (correct usage) but when the customer uses it they do (perhaps incorrect but normal), so I need to get rid of them). It's more that the duplicates would only appear in a way that the program cannot control (e.g. sometimes someone goes into the database and adds a duplicate row (hopefully not a normal situation)).
Something like duplicate rows, though, is likely a symptom of some other bug, so you shouldn't mask it but try and find the root cause so that the deletion isn't necessary.
I have a simple question for you (I hope) :)
I have pretty much always used void as a "return" type when doing CRUD operations on data.
E.g. consider this code:
public void Insert(IAuctionItem item) {
    if (item == null) {
        AuctionLogger.LogException(new ArgumentNullException("item is null"));
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges();
}
and then consider this code:
public bool Insert(IAuctionItem item) {
    if (item == null) {
        AuctionLogger.LogException(new ArgumentNullException("item is null"));
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges();
    return true;
}
It really just comes down to whether or not you should report that something was inserted (and went well).
I typically go with the first option there.
Given your code, if something goes wrong with the insert there will be an Exception thrown.
Since you have no try/catch block around the Data Access code, the calling code will have to handle that Exception...thus it will know both if and why it failed. If you just returned true/false, the calling code will have no idea why there was a failure (it may or may not care).
I think it would make more sense if in the case where "item == null" that you returned "false". That would indicate that it was a case that you expect to happen not infrequently, and that therefore you don't want it to raise an exception but the calling code could handle the "false" return value.
As it stands, you'll return "true" or there'll be an exception - that doesn't really help you much.
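A sketch of what that could look like, based on the Insert method from the question (whether returning false or throwing is right depends on how expected a null item really is):

public bool Insert(IAuctionItem item) {
    if (item == null) {
        // Expected, non-exceptional case: log it and report failure to the caller
        // instead of letting a NullReferenceException happen further down.
        AuctionLogger.LogException(new ArgumentNullException("item"));
        return false;
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges(); // unexpected failures still surface as exceptions
    return true;
}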
Don't fight the framework you happen to be in. If you are writing C code, where return values are the most common mechanism for communicating errors (for lack of a better built in construct), then use that.
.NET base class libraries use exceptions to communicate errors, and their absence means everything is okay. Because almost all code uses the BCL, much of it will be written to expect exceptions; if it has to call into a library written as if C# were C with no support for exceptions, each invocation will need to be wrapped in an if (!myObject.DoSomething) { Console.WriteLine("Damn"); } block.
For the next developer to use your code (which could be you after a few years when you've forgotten how you originally did it), it will be a pain to start writing all the calling code to take advantage of having error conditions passed as return values, as changes to values in an output parameter, as custom events, as callbacks, as messages to queue or any of the other imaginable ways to communicate failure or lack thereof.
I think it depends. Imagine that your user wants to add a new post to a forum, and the add fails for some reason. If you don't tell the user, they will never know that something went wrong. The best way is to throw another exception with a nice message for them.
And if it does not relate to the user, and you have already logged it to the database log, you shouldn't care about the return value any more.
I think it is a good idea to notify the user whether the operation went well or not. Regardless of how much you test your code and try to think outside the box, it is most likely that during its lifetime the software will encounter a problem you did not cater for, making it behave incorrectly. The use of notifications, in my opinion, allows the user to take action, a sort of Plan B, if you like, for when the program fails. This action can either be a simple workaround or informing people from the IT department so that they can fix it.
I'd rather click that extra "OK" button than learn that something went wrong when it is too late.
You should stick with void. If you need more data, use variables for it, since you may need specific data (and it can be more than one number/string), and the exception mechanism is a good solution for handling errors.
So if you want to know how many rows were affected, whether a stored procedure returned something, etc., a single return value will limit you.
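If you do want that kind of extra data back without giving up exceptions for error handling, one option (sketched here, not from the question) is an out parameter alongside the void-style contract:

// Sketch: errors still surface as exceptions; extra information comes back
// through an out parameter rather than overloading the return value.
public void Insert(IAuctionItem item, out int rowsAffected) {
    if (item == null) {
        throw new ArgumentNullException("item");
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges();
    rowsAffected = 1; // hypothetical: in this sketch a successful call inserts exactly one row
}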
So I have this code that takes care of command acknowledgment from remote computers, sometimes (like once in 14 days or something) the following line throws a null reference exception:
computer.ProcessCommandAcknowledgment( commandType );
What really bugs me is that I check for a null reference before it, so I have no idea what's going on.
Here's the full method, for what it's worth:
public static void __CommandAck( PacketReader reader, SocketContext context )
{
    string commandAck = reader.ReadString();
    Type commandType = Type.GetType( commandAck );
    Computer computer = context.Client as Computer;

    if (computer == null)
    {
        Console.WriteLine("Client already disposed. Couldn't complete operation");
    }
    else
    {
        computer.ProcessCommandAcknowledgment( commandType );
    }
}
Any clues?
Edit: ProcessCommandAcknowledgment:
public void ProcessCommandAcknowledgment( Type ackType )
{
    // Note: if m_CurrentCommand were null here, this line would throw a
    // NullReferenceException that bubbles up to the caller.
    if( m_CurrentCommand.GetType() == ackType )
    {
        m_CurrentCommand.Finish();
    }
}
Based on the information you gave, it certainly appears impossible for a null ref to occur at that location. So the next question is "How do you know that the particular line is creating the NullReferenceException?" Are you using the debugger or stack trace information? Are you checking a retail or debug version of the code?
If it's the debugger, various setting combinations can essentially cause the debugger to appear to report the NullRef in a different place. The main one that would do that is the Just My Code setting.
In my experience, I've found the most reliable way to determine the line an exception actually occurs on is to ...
Turn off JMC
Compile with Debug
Debugger -> Settings -> Break on Throw CLR exceptions.
Check the StackTrace property in the debugger window
I would bet money that there's a problem with your TCP framing code (if you have any!)
"PacketReader" perhaps suggests that you don't. Because, technically, it would be called "FrameReader" or something similar if you did.
If the two PCs involved are on a local LAN or something, then that would probably explain the 14-day interval. If you tried this over the Internet, I bet your error frequency would be much higher, especially if the WAN bandwidth was contended.
Is it possible that ReadString() is returning null? This would cause GetType to fail. Perhaps you've received an empty packet? Alternatively, the string may not match a type and thus commandType would be null when used later.
EDIT:
Have you checked that m_CurrentCommand is not null when you invoke ProcessCommandAcknowledgment?
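If either of those turns out to be the case, a defensive sketch along these lines would surface the real problem instead of an unexplained NullReferenceException:

public void ProcessCommandAcknowledgment( Type ackType )
{
    if ( ackType == null )
    {
        Console.WriteLine( "Acknowledgment string did not resolve to a command type." );
        return;
    }

    if ( m_CurrentCommand == null )
    {
        Console.WriteLine( "Acknowledgment received but no command is currently pending." );
        return;
    }

    if ( m_CurrentCommand.GetType() == ackType )
    {
        m_CurrentCommand.Finish();
    }
}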
What are the other thread(s) doing?
Edit: You mention that the server is single threaded, but another comment suggests that only this portion is single threaded. If that's the case, you could still have concurrency issues.
Bottom line here, I think, is that you either have a multi-thread issue or a CLR bug. You can guess which I think is more likely.
If you have optimizations turned on, the reported location is likely a long way from where the exception actually happens.
Something similar happened to me a few years back.
Or else a possible thread race somewhere where context gets set to null by another thread. That would also explain the uncommonness of the error.
Okay, there are really only a few possibilities.
Somehow your computer reference is being tromped by the time you call that routine.
Something under the call is throwing the null pointer dereference error, but it's being detected at that line.
Looking at it, I'm very suspicious that the stack is getting corrupted, causing your computer variable to get mangled. Check the subroutine/method/function calls around the one you have trouble with; in particular, check that what you're making into a "Computer" item really is the type you expect.
computer.ProcessCommandAcknowledgment( commandType );
Do you have debugging symbols to be able to step into this?
The null ref exception could be thrown by ProcessCommandAcknowledgment and bubble up.