Is there a straightforward function call to determine if Concurrent GC has been enabled in the runtime I'm running in? We have a heterogeneous environment and we need to log which mode is being used so we can identify which systems need to be modified.
I realise that I can open the exe.config and check it manually; I was just wondering if there is a property sitting somewhere that exposes this info without having to resort to a hack.
If System.Runtime.GCSettings.LatencyMode has the value LatencyMode.Batch then concurrent GC has been disabled.
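A minimal sketch of that check, assuming you just want to log the mode at startup:

using System;
using System.Runtime;

static class GcModeLogger
{
    public static void LogGcMode()
    {
        // GCLatencyMode.Batch is what you get when concurrent/background GC is disabled.
        bool concurrentDisabled = GCSettings.LatencyMode == GCLatencyMode.Batch;
        Console.WriteLine("Concurrent GC enabled: " + !concurrentDisabled);
    }
}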
Sadly, it doesn't look like there is a method baked in; if there were, I would expect to see it on System.Runtime.GCSettings.
Unless you have a good reason, it's best to leave concurrent garbage collection on (as it's the default).
Generally the only things that care about this are unmanaged applications that are hosting the CLR themselves (such as SQL Server or IIS).
Is your application a good candidate for programmatically changing gcConcurrent in the exe.config? If not, it would seem that, as you said, reading the value from the config is the less hacky way (IMHO).
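If you do end up reading it from the config, it only takes a few lines. This is just a sketch that parses the exe.config as XML; the gcConcurrent element may simply be absent, in which case the default (enabled) applies:

using System;
using System.Xml.Linq;

static class GcConcurrentConfig
{
    public static bool IsConcurrentEnabled()
    {
        // Path of the current process's exe.config.
        string configPath = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile;
        XDocument doc = XDocument.Load(configPath);

        // Looks for <configuration><runtime><gcConcurrent enabled="..."/></runtime></configuration>.
        XElement gcConcurrent = doc.Root?.Element("runtime")?.Element("gcConcurrent");
        string enabled = gcConcurrent?.Attribute("enabled")?.Value;

        // If the element or attribute is missing, concurrent GC defaults to enabled.
        return enabled == null || bool.Parse(enabled);
    }
}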
Nevertheless, I did a bit of research, and starting with .NET 4 you can use ETW to detect garbage collection events:
Event tracing for Windows (ETW) is a tracing system that supplements the profiling and debugging support provided by the .NET Framework. Starting with the .NET Framework 4, garbage collection ETW events capture useful information for analyzing the managed heap from a statistical point of view. For example, the GCStart_V1 event, which is raised when a garbage collection is about to occur, provides the following information:
Which generation of objects is being collected.
What triggered the garbage collection.
Type of garbage collection (concurrent or not concurrent).
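For completeness, here is a rough sketch of how those events can be consumed with the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package (my choice of library, not something the documentation above prescribes); it needs to run elevated because it opens a real-time ETW session:

using System;
using Microsoft.Diagnostics.Tracing;
using Microsoft.Diagnostics.Tracing.Parsers;
using Microsoft.Diagnostics.Tracing.Parsers.Clr;
using Microsoft.Diagnostics.Tracing.Session;

static class GcEtwListener
{
    public static void Listen()
    {
        using (var session = new TraceEventSession("GcModeSession"))
        {
            // Subscribe to the CLR provider's GC keyword only.
            session.EnableProvider(ClrTraceEventParser.ProviderGuid,
                                   TraceEventLevel.Informational,
                                   (ulong)ClrTraceEventParser.Keywords.GC);

            session.Source.Clr.GCStart += (GCStartTraceData data) =>
            {
                // data.Type distinguishes concurrent/background collections from
                // non-concurrent ones; data.Reason says what triggered the collection.
                Console.WriteLine($"{data.ProcessName} ({data.ProcessID}): gen {data.Depth}, type {data.Type}, reason {data.Reason}");
            };

            session.Source.Process(); // blocks and dispatches events until the session is stopped
        }
    }
}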
I am writing an editor in C#/.NET using AvalonDock.
If I close a document, the memory consumption of my program doesn't decrease, even if I call the garbage collector manually. So I assume that there is still a reference to the document somewhere.
The software is huge and the document is a very central component, so it's not easy to find every reference to it.
Does the Visual Studio 2010 debugger have any functionality to search for objects of a certain class in memory, or something similar?
Alternatively, what would you do, if faced with such a problem?
You need to use a memory profiler to find out what objects are in memory and what holds a reference to them.
There are several different options - commercial and free.
You can do what you want using free tools.
The basic steps are as follows:
Run Your Application
Attach windbg to its process
Load the "sos" helper module (.loadby sos mscorwks)
Dump the heap (!DumpHeap -stat)
Find the type you're interested in and see if it's actually the thing using memory
Dump the heap for your particular type (!DumpHeap -type MyNameSpace.MyType)
Find the memory address of an object you think should be disposed, and see if it is "rooted" somewhere. (!gcroot "whatever the address was")
I've personally used this technique to great effect when tracking down memory leaks in graphics-intensive C# programs.
I learned this from Rico Mariani of Microsoft. Here is a blog entry that describes it in detail.
* http://blogs.msdn.com/b/ricom/archive/2004/12/10/279612.aspx
Remember that even when .NET cleans itself up, Windows may not decide to actually release the memory; often it only does so when another application actually needs it. So, use a memory profiler :)
While using SlimTune to profile a C# application, I find that when profiling native functions is enabled there are lots of entries for a function called "CoUninitializeE". CoUninitialize seems to be related to COM objects; however, I'm not directly using any COM objects, and Google has no information about the version ending with an "E".
Does anyone have knowledge of what this function is/how to reduce the amount of time spent on it? (For instance, is it related to memory management, so that reducing memory allocations or deallocations would help?)
Edit
It appears the function's name is actually "CoUninitializeEx" and that SlimTune is just chopping off a letter for some reason. I still would appreciate knowledge of what leads to this function being called.
CoInitializeEx() and CoUninitialize() are pretty core in Windows programming. They respectively initialize and shut down COM on a thread. The CLR calls these functions automatically before and after a Thread runs. It is pretty hard to avoid using COM in a .NET program; it is the basic extensibility model for native Windows code. It is quite invisible, thanks to the many wrapper classes in the .NET Framework that hide the plumbing.
The generic diagnostic is that you are using a lot of threads. Yes, that's expensive. The thread pool is a workaround.
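As a sketch of that workaround, queue short-lived work onto pooled threads instead of spinning up a dedicated Thread per work item, so the per-thread COM initialize/uninitialize cost is paid only once per pooled thread:

using System;
using System.Threading.Tasks;

static class ThreadPoolSketch
{
    public static void RunWork()
    {
        var tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
        {
            int workItem = i;
            // Task.Run executes on ThreadPool threads, which are created once and reused.
            tasks[workItem] = Task.Run(() => Console.WriteLine("Work item " + workItem));
        }
        Task.WaitAll(tasks);
    }
}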
Is there a way to find out the memory usage of each DLL within a C# application that uses COM DLLs? Or what would you say is the best way to find out why memory grows exponentially when using a COM object (i.e. whether the COM object has a memory leak, or whether some special freeing up of objects passed to managed code has to occur, and/or how to do that)?
Are you releasing the COM object after use (Marshal.ReleaseComObject)?
What type of parameters are you passing in/out of the calls?
If you don't have the COM object's source code and want to determine why it is 'leaking', run the COM object out of proc, attach WinDbg to the process, and set breakpoints on the memory allocation APIs (HeapAlloc, etc.). Look at the call stacks and allocation patterns. Sure, you can use profilers on the managed side, but if you want to know what is really going on you are going to have to get your hands dirty...
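On the managed side, the release pattern the first question above refers to looks roughly like this sketch (the ProgID is only an example; substitute whatever COM object you are actually creating):

using System;
using System.Runtime.InteropServices;

static class ComCleanupSketch
{
    public static void UseComObject()
    {
        Type comType = Type.GetTypeFromProgID("Scripting.FileSystemObject"); // example ProgID
        object comObject = Activator.CreateInstance(comType);
        try
        {
            // ... call methods on the COM object via dynamic or an interop interface ...
        }
        finally
        {
            // Decrements the RCW reference count so the underlying COM object can be freed
            // without waiting for the garbage collector to finalize the wrapper.
            Marshal.ReleaseComObject(comObject);
        }
    }
}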
A Microsoft support engineer has a fabulous blog that walks through lots of cases like this. She goes over all the tools she uses. I found it extremely helpful to read through all of her posts when I was debugging this kind of stuff a few years ago.
Edit: Apparently, she has added a series of labs that explain how to setup your environment and diagnose different problems. You may want to start here.
dotTrace rocks: http://www.jetbrains.com/profiler/
Keep in mind that all COM objects in .NET are basically MarshalByRefObject-derived classes at heart, so you should be able to look for memory consumption by such objects as one potential filter.
The first thing I'd want to do is be absolutely certain that I'm not leaking references anywhere, then reduce the problem to the smallest set of steps that will reproduce it (a good profiler is essential; I happen to use and recommend Red Gate's ANTS Profiler). It can be done, and it is worth sending example code that reproduces the issue to the vendor of the COM object so they can resolve it (there is actually a hotfix for Crystal Reports as a result of a memory leak in it which I found :)
I wrote C++ for 10 years. I encountered memory problems, but they could be fixed with a reasonable amount of effort.
For the last couple of years I've been writing C#. I find I still get lots of memory problems. They're difficult to diagnose and fix due to the non-determinism, and because the C# philosophy is that you shouldn't have to worry about such things when you very definitely do.
One particular problem I find is that I have to explicitly dispose of and clean up everything in code. If I don't, then the memory profilers don't really help, because there is so much chaff floating about that you can't find a leak within all the data they're trying to show you. I wonder if I've got the wrong idea, or if the tool I've got isn't the best.
What kind of strategies and tools are useful for tackling memory leaks in .NET?
I use Scitech's MemProfiler when I suspect a memory leak.
So far, I have found it to be very reliable and powerful. It has saved my bacon on at least one occasion.
The GC works very well in .NET IMO, but just like any other language or platform, if you write bad code, bad things happen.
Just for the forgetting-to-dispose problem, try the solution described in this blog post. Here's the essence:
public class TimedLock : IDisposable
{
    public void Dispose()
    {
        // Dispose logic here ...

        // It's a bad error if someone forgets to call Dispose,
        // so in Debug builds we add a finalizer to detect the
        // error. If Dispose is called, we suppress the finalizer.
#if DEBUG
        GC.SuppressFinalize(this);
#endif
    }

#if DEBUG
    ~TimedLock()
    {
        // If this finalizer runs, someone somewhere failed to
        // call Dispose, which means we've failed to leave
        // a monitor!
        System.Diagnostics.Debug.Fail("Undisposed lock");
    }
#endif
}
We've used Ants Profiler Pro by Red Gate software in our project. It works really well for all .NET language-based applications.
We found that the .NET garbage collector is very "safe" in its cleaning up of in-memory objects (as it should be). It would keep objects around just because we might be using them sometime in the future. This meant we needed to be more careful about the number of objects that we inflated in memory. In the end, we converted all of our data objects over to an "inflate on-demand" model (just before a field is requested) in order to reduce memory overhead and increase performance.
EDIT: Here's a further explanation of what I mean by "inflate on demand." In our object model of our database we use Properties of a parent object to expose the child object(s). For example if we had some record that referenced some other "detail" or "lookup" record on a one-to-one basis we would structure it like this:
Class ParentObject
    Private mRelatedObject As New CRelatedObject

    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            mRelatedObject.getWithID(RelatedObjectID)
            Return mRelatedObject
        End Get
    End Property
End Class
We found that the above system created some real memory and performance problems when there were a lot of records in memory. So we switched over to a system where objects were inflated only when they were requested, and database calls were done only when necessary:
Class ParentObject
    Private mRelatedObject As CRelatedObject

    Public ReadOnly Property RelatedObject() As CRelatedObject
        Get
            If mRelatedObject Is Nothing Then
                mRelatedObject = New CRelatedObject()
            End If
            If mRelatedObject.isEmptyObject Then
                mRelatedObject.getWithID(RelatedObjectID)
            End If
            Return mRelatedObject
        End Get
    End Property
End Class
This turned out to be much more efficient because objects were kept out of memory until they were needed (the Get accessor was called). It provided a very large performance boost by limiting database hits and a huge gain in memory usage.
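In C#, the same inflate-on-demand idea can be expressed with Lazy<T>. This is only a sketch; CRelatedObject and getWithID are the hypothetical names carried over from the VB example above:

using System;

class ParentObject
{
    private readonly Lazy<CRelatedObject> _relatedObject;

    public ParentObject(int relatedObjectID)
    {
        // Nothing is created or loaded from the database until the property is first read.
        _relatedObject = new Lazy<CRelatedObject>(() =>
        {
            var related = new CRelatedObject();
            related.getWithID(relatedObjectID);
            return related;
        });
    }

    public CRelatedObject RelatedObject
    {
        get { return _relatedObject.Value; }
    }
}

class CRelatedObject
{
    public void getWithID(int id) { /* load the detail record from the database ... */ }
}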
You still need to worry about memory when you are writing managed code unless your application is trivial. I will suggest two things: first, read CLR via C# because it will help you understand memory management in .NET. Second, learn to use a tool like CLRProfiler (Microsoft). This can give you an idea of what is causing your memory leak (e.g. you can take a look at your large object heap fragmentation)
Are you using unmanaged code? If you are not using unmanaged code, according to Microsoft, memory leaks in the traditional sense are not possible.
Memory used by an application may not be released however, so an application's memory allocation may grow throughout the life of the application.
From How to identify memory leaks in the common language runtime at Microsoft.com
A memory leak can occur in a .NET Framework application when you use unmanaged code as part of the application. This unmanaged code can leak memory, and the .NET Framework runtime cannot address that problem.
Additionally, a project may only appear to have a memory leak. This condition can occur if many large objects (such as DataTable objects) are declared and then added to a collection (such as a DataSet). The resources that these objects own may never be released, and the resources are left alive for the whole run of the program. This appears to be a leak, but actually it is just a symptom of the way that memory is being allocated in the program.
For dealing with this type of issue, you can implement IDisposable. If you want to see some of the strategies for dealing with memory management, I would suggest searching for IDisposable, XNA, and memory management, as game developers need more predictable garbage collection and so must force the GC to do its thing.
One common mistake is to not remove event handlers that subscribe to an object. An event handler subscription will prevent an object from being recycled. Also, take a look at the using statement which allows you to create a limited scope for a resource's lifetime.
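A minimal sketch of both points (the Publisher and Subscriber names are made up for illustration): unsubscribing in Dispose removes the reference the publisher holds to the subscriber, and the using statement guarantees Dispose runs at the end of the scope.

using System;

class Publisher
{
    public event EventHandler SomethingHappened;

    public void Raise()
    {
        SomethingHappened?.Invoke(this, EventArgs.Empty);
    }
}

class Subscriber : IDisposable
{
    private readonly Publisher _publisher;

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        // The publisher now holds a reference to this subscriber through the delegate.
        _publisher.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { /* ... */ }

    public void Dispose()
    {
        // Without this line the subscriber stays reachable for as long as the publisher lives.
        _publisher.SomethingHappened -= OnSomethingHappened;
    }
}

class Program
{
    static void Main()
    {
        var publisher = new Publisher();
        using (var subscriber = new Subscriber(publisher))
        {
            publisher.Raise();
        } // Dispose (and the unsubscription) happens here.
    }
}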
This blog has some really wonderful walkthroughs using windbg and other tools to track down memory leaks of all types. Excellent reading to develop your skills.
I just had a memory leak in a windows service, that I fixed.
First, I tried MemProfiler. I found it really hard to use and not at all user friendly.
Then I used JustTrace, which is easier to use and gives you more details about the objects that are not disposed correctly.
It allowed me to solve the memory leak really easily.
If the leaks you are observing are due to a runaway cache implementation, this is a scenario where you might want to consider the use of WeakReference. This could help to ensure that memory is released when necessary.
However, IMHO it would be better to consider a bespoke solution - only you really know how long you need to keep the objects around, so designing appropriate housekeeping code for your situation is usually the best approach.
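For the cache case, a WeakReference-based lookup might look like this sketch (the WeakCache name and shape are mine, not an established API); entries can be collected whenever nothing else references the value:

using System;
using System.Collections.Generic;

class WeakCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference<TValue>> _entries =
        new Dictionary<TKey, WeakReference<TValue>>();

    public void Add(TKey key, TValue value)
    {
        _entries[key] = new WeakReference<TValue>(value);
    }

    public bool TryGet(TKey key, out TValue value)
    {
        value = null;
        // The lookup succeeds only if the target object is still alive;
        // otherwise the GC has already reclaimed it.
        return _entries.TryGetValue(key, out var weakRef) && weakRef.TryGetTarget(out value);
    }
}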
I prefer dotMemory from JetBrains.
Big guns - Debugging Tools for Windows
This is an amazing collection of tools. You can analyze both managed and unmanaged heaps with it, and you can do it offline. This was very handy for debugging one of our ASP.NET applications that kept recycling due to memory overuse. I only had to create a full memory dump of the living process running on the production server; all the analysis was done offline in WinDbg. (It turned out some developer was overusing in-memory Session storage.)
"If broken it is..." blog has very useful articles on the subject.
After one of my fixes for a managed application, I had the same question: how do I verify that my application will not have the same memory leak after my next change? So I wrote something like an object release verification framework; please take a look at the NuGet package ObjectReleaseVerification. You can find a sample here: https://github.com/outcoldman/OutcoldSolutions-ObjectReleaseVerification-Sample, and information about this sample here: http://outcoldman.com/en/blog/show/322
The best thing to keep in mind is to keep track of the references to your objects. It is very easy to end up with hanging references to objects that you don't care about anymore.
If you are not going to use something anymore, get rid of it.
Get used to using a cache provider with sliding expirations, so that if something isn't referenced for a desired time window it is dereferenced and cleaned up, but if it is being accessed a lot it will stay in memory.
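As a sketch of that approach using System.Runtime.Caching.MemoryCache (one possible cache provider; your environment may already dictate another):

using System;
using System.Runtime.Caching; // requires a reference to System.Runtime.Caching.dll

static class SlidingCacheSketch
{
    public static void Demo()
    {
        var cache = MemoryCache.Default;
        var policy = new CacheItemPolicy
        {
            // The entry is evicted if it has not been accessed for 10 minutes.
            SlidingExpiration = TimeSpan.FromMinutes(10)
        };

        cache.Set("report:42", new byte[1024], policy);

        // Each read resets the sliding window, so hot entries stay in memory
        // while unused ones expire and get cleaned up.
        var data = cache.Get("report:42") as byte[];
        Console.WriteLine(data?.Length);
    }
}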
One of the best tools is using the Debugging Tools for Windows, and taking a memory dump of the process using adplus, then use windbg and the sos plugin to analyze the process memory, threads, and call stacks.
You can use this method to identify problems on servers too: after installing the tools, share the directory, connect to the share from the server using net use, and take either a crash or hang dump of the process.
Then analyze offline.
From Visual Studio 2015 onward, consider using the out-of-the-box Memory Usage diagnostic tool to collect and analyze memory usage data.
The Memory Usage tool lets you take one or more snapshots of the managed and native memory heap to help understand the memory usage impact of object types.
One of the best tools I have used is dotMemory. You can use it as an extension in Visual Studio. After running your app, you can analyze every part of the memory it uses (by object, namespace, etc.), take snapshots of it, and compare them with other snapshots.
dotMemory