This question already has answers here:
Does the CLR garbage collection methodology mean it's safe to throw circular object references around?
(2 answers)
Closed 2 years ago.
There are two scenarios where I'm trying to understand how the GC will act.
1. There are two objects, object1 and object2.
object1 holds a reference to object2, and object2 holds a reference to object1.
Now neither of those objects is in use any more, so the GC could collect them.
What will happen? Does the GC skip them during this collection?
2. The same question, but with 4 (or n) objects that all hold references to each other.
What will the GC do in this case?
Unlike COM, the common language runtime does not use reference counting to govern object lifetime. Instead, the garbage collector traces object references and identifies objects that can no longer be accessed by running code.
This simplifies component programming a great deal, because you do not have to worry about circular references. If a group of objects contain references to each other, but none of these objects are referenced directly or indirectly from stack or shared variables, then garbage collection will automatically reclaim the memory.
http://msdn.microsoft.com/en-us/library/0t81zye4(v=vs.71).aspx
The GC used by .NET is a tracing garbage collector ("Mark and Sweep" is a related term)
Memory objects are considered "garbage" if they can no longer be reached by following pointers/references from the non-garbage part of your program's memory.
To determine what is reachable and what is not, the GC first establishes a set of root references/pointers. Those are references that are guaranteed to be reachable. Examples include local variables and static fields.
It then follows these references recursively (traces) and marks each object it encounters as "not garbage". Once it runs out of references to follow, it enters the "sweep" phase where every object that has not been marked as "non garbage" is freed (which might include invoking the object's finalizer).
So as soon as none of the objects in your "object ring" is referenced by any part of your "live" objects, it will be garbage collected.
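A small way to observe this yourself is with a WeakReference probe (just a sketch: the Node type and the probe are mine, and in a Debug build the JIT may keep the locals alive until the end of the method, so run it optimized):

using System;

class Node
{
    public Node Next;
}

class RingDemo
{
    static void Main()
    {
        var a = new Node();
        var b = new Node();
        a.Next = b;
        b.Next = a;                       // a and b form a reference ring

        var probe = new WeakReference(a); // observes a without keeping it alive
        a = null;
        b = null;                         // the ring is no longer reachable from any root

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        Console.WriteLine(probe.IsAlive); // typically False: the whole ring was collected
    }
}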
AFAIK, the GC will only collect objects which are no longer reachable from any GC root. So as long as some reachable object still holds a reference to an object, it won't be collected.
If the objects survive for long enough, they will be promoted to the next generation, which is collected less often.
I'm not sure, but you could just try it by creating massive amounts of objects (4 threads filling lists) and then nulling the lists.
If the RAM usage goes down, the GC recognised that there is no longer any reference to the block of interlinked objects; if it doesn't, it doesn't :)
Duplicate of Does the CLR garbage collection methodology mean it's safe to throw circular object references around?. .NET's GC isn't a reference counter; it traces every object reachable from a GC root, and anything it cannot reach is collected. It can therefore handle and collect these circular references, no matter how many objects are involved.
Related
A few years ago I read the book CLR via C#, and the other day I was asked a question about an array that got me a bit puzzled. The question was to figure out when the array in the method below becomes available for garbage collection:
public static double ThingRatio()
{
    var input = new[] { 1, 1, 2, 3, 5, 8 };
    var count = input.Length;
    // Let's suppose that the line below is the last use of the array input
    var thingCount = CountThings(input);
    input = null;
    return (double)thingCount / count;
}
According to the answer given here: When is an object subject to garbage collection? which states:
They will both become eligible for collection as soon as they are not
needed anymore. This means that under some circumstances, objects can
be collected even before the end of the scope in which they were
defined. On the other hand, the actual collection might also happen
much later.
I would tend to say that starting after line 6 (i.e. input = null;) the array becomes subject to GC, but I am not that sure... (I mean the array is surely no longer needed after that assignment, but I'm also struggling with the fact that its last real use is the CountThings call, while at the same time the array is still "needed" for the null assignment itself).
Remember, objects and variables are not the same thing. A variable is scoped to a particular method or type, but the object it refers to (or used to refer to) has no such concept; it's just a blob of memory. If the GC runs after input = null; but before the end of the method, the array is just one more orphaned object. It's not reachable, and therefore eligible for collection.
And "reachable" (rather then "needed" ) is the key word here. The array object is no longer needed after this line: var thingCount = CountThings(input);. However, it's still reachable, and so could not be collected at that point...
We also need to remember it isn't collected right away. It's only eligible to be collected. As a practical matter, I've found the .Net runtime doesn't tend to invoke the GC in the middle of a user method unless it really has to. Generally speaking, it is not needed or helpful to set a variable to null early, and in some rare cases can even be harmful.
Finally, I'll add that the code we read and write is not the same code actually used by the machine. Remember, there is also a compile step to translate all this to IL, and later a JIT process to create the final machine code that really runs. Even the concept of one line following the next is already an abstraction away from what actually happens. One line may expand to several lines of actual IL, or in some cases even be rewritten to involve entirely new compiler-generated types, as with closures or iterator blocks. So everything here really only refers to the simple case.
GC Myth: setting an object's reference to null will force the GC to collect it right away.
GC Truth: setting an object's reference to null will sometimes allow the GC to collect it sooner.
Taking part of the blogpost I'm referencing below and applying it to your question, the answer is as follows:
The JIT is usually smart enough to realize that input = null can be optimized away. That leaves CountThings(input) as the last reference to the object. So after that call, the input is no longer used and is removed as a GC Root. That leaves the Object in memory orphaned (no references pointing to it), making it eligible for collection. When the GC actually goes about collecting it, is another matter.
More information to be found at To Null or Not to Null
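One hedged way to see this for yourself is to probe the array with a WeakReference. This is only a sketch: CountThings is the method from the question, the tracker parameter is something I added for observation, and the outcome depends on the build, because only an optimized (Release) build lets the JIT end the variable's lifetime at its last real use:

static double ThingRatio(out WeakReference tracker)
{
    var input = new[] { 1, 1, 2, 3, 5, 8 };
    var count = input.Length;
    tracker = new WeakReference(input);    // observes the array without keeping it alive

    var thingCount = CountThings(input);   // last real use of the array

    GC.Collect();                          // in an optimized build the array can already be unreachable here
    Console.WriteLine(tracker.IsAlive);    // often False under Release, True under Debug

    return (double)thingCount / count;
}

The input = null; line is omitted here because, as described above, the JIT can optimize it away anyway.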
No object can be garbage-collected while it is recognized as existing. An object will exist in .NET for as long as any reference to it exists or it has a registered finalizer, and will cease to exist once neither condition applies. References in objects will exist as long as the objects themselves exist, and references in automatic variables will exist as long as there is any means via which they will be observed. If the garbage collector detects that the only references to an object with no registered finalizer are held in weak references, those references will be destroyed, causing the object to cease to exist. If the garbage collector detects that the only references to an object with a registered finalizer are held in weak references, any weak references whose "track resurrection" property is false will be destroyed, a reference to the object will be placed in a strongly-rooted list of objects needing "immediate" finalization, and the finalizer will be unregistered (thus allowing the object to cease to exist if and when the finalizer reaches a point in execution where no reference to the object could ever be observed).
Note that some sources confuse the triggering of an object's finalizer with garbage-collection, but an object whose finalizer is triggered is guaranteed to continue to exist for at least as long as that finalizer takes to execute, and may continue to exist indefinitely if any references to it exist when the finalizer finishes execution.
In your example, there are three scenarios that could apply, depending upon what CountThings does with the passed-in reference:
If CountThings does not store a copy of the reference anywhere, or any copies of references that it does store get overwritten before input gets overwritten, then it will cease to exist as soon as input gets overwritten or ceases to exist [automatic-duration variables may cease to exist any time a compiler determines that their value will no longer be observed].
If CountThings stores a copy of the reference somewhere that continues to exist after it returns, and the last extant reference is held by something other than a weak reference, then the object will cease to exist as soon as the last reference is destroyed.
If the last existing reference to the array ends up being held in a weak reference, the array will continue to exist until the first GC cycle where that is the case, whereupon the weak reference will be cleared, causing the array to cease to exist. Note that the lack of non-weak references to the array will only be relevant when a GC cycle occurs. It is possible (and not particularly uncommon) for a program to store a copy of a reference into a WeakReference, ConditionalWeakTable, or other object holding some form of weak reference, destroy all other copies, and then read out the weak reference to produce a non-weak copy of the reference before the next GC cycle. If that occurs, the system will neither know nor care that there was a time when no non-weak copies of the reference existed. If the GC cycle occurs before the reference gets read out, however, then code which later examines the weak reference will find it blank.
A key observation is that while finalizers and weak references complicate things slightly, the only way in which the GC destroys objects is by invalidating weak forms of references. As far as the GC is concerned, the only kinds of storage that exist when the system isn't actually performing a GC cycle are those used by objects that exist, those used for .NET's internal purposes, and regions of storage that are available to satisfy future allocations. If an object is created, the storage it occupied will cease to be a region of storage available for future allocations. If the object later ceases to exist, the storage that had contained the object will also cease to exist in any form the GC knows about until the next GC cycle. The next GC cycle won't destroy the object (which had already ceased to exist), but will instead add the storage which had contained it back to its list of areas that are available to satisfy future allocations (causing that storage to exist again).
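As a concrete illustration of the "read it back out before the next GC cycle" scenario described above (a hypothetical snippet; whether the recovery succeeds depends entirely on whether a GC cycle happened in between):

var data = new byte[1024];
var weak = new WeakReference(data);
data = null;                           // no strong references remain

// If no GC cycle has run yet, Target hands back a brand-new strong reference,
// and the system neither knows nor cares that only weak copies existed for a while.
var recovered = (byte[])weak.Target;
if (recovered != null)
    Console.WriteLine("recovered {0} bytes", recovered.Length);
else
    Console.WriteLine("a GC cycle ran first; the weak reference is now blank");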
Today I saw a piece of code that seemed odd to me at first glance and made me reconsider. Here is a shortened version of the code:
if (list != null)
{
    list.Clear();
    list = null;
}
My thought was, why not replace it simply by:
list = null;
I read a bit and I understand that clearing a list will remove the references to the objects, allowing the GC to do its thing, but will not "resize" anything: the memory allocated for the list itself stays the same.
On the other side, setting it to null would also remove the reference to the list (and thus to its items), also allowing the GC to do its thing.
So I have been trying to figure out a reason to do it the like the first block. One scenario I thought of is if you have two references to the list. The first block would clear the items in the list so even if the second reference remains, the GC can still clear the memory allocated for the items.
Nonetheless, I feel like there's something weird about this so I would like to know if the scenario I mentioned makes sense?
Also, are there any other scenarios where we would have to Clear() a list right before setting the reference to null?
Finally, if the scenario I mentioned made sense, wouldn't it be better off to just make sure we don't hold multiple references to this list at once and how would we do that (explicitly)?
Edit: I get the difference between Clearing and Nulling the list. I'm mostly curious to know if there is something inside the GC that would make it so that there would be a reason to Clear before Nulling.
The list.Clear() is not necessary in your scenario (where the List is private and only used within the class).
A great intro level link on reachability / live objects is http://levibotelho.com/development/how-does-the-garbage-collector-work :
How does the garbage collector identify garbage?
In Microsoft’s
implementation of the .NET framework the garbage collector determines
if an object is garbage by examining the reference type variables
pointing to it. In the context of the garbage collector, reference
type variables are known as “roots”. Examples of roots include:
A reference on the stack
A reference in a static variable
A reference in another object on the managed heap that is not eligible for garbage
collection
A reference in the form of a local variable in a method
The key bit in this context is A reference in another object on the managed heap that is not eligible for garbage collection. Thus, if the List is eligible to be collected (and the objects within the list aren't referenced elsewhere) then those objects in the List are also eligible to be collected.
In other words, the GC will realise that list and its contents are unreachable in the same pass.
So, is there an instance where list.Clear() would be useful? Yes. It might be useful if you have two references to a single List (e.g. as two fields in two different objects). The code holding one of those references may wish to clear the list in a way that the other reference also observes - in which case list.Clear() is perfect.
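A minimal sketch of that two-references case (the names are made up):

using System;
using System.Collections.Generic;

var shared = new List<string> { "a", "b", "c" };

var owner1 = shared;                 // e.g. a field in one object
var owner2 = shared;                 // e.g. a field in another object

owner1.Clear();                      // both owners now observe an empty list
Console.WriteLine(owner2.Count);     // prints 0

owner1 = null;                       // only drops one reference; owner2 still keeps the list alive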
This answer started as a comment for Mick, who claims that:
It depends on which version of .NET you are working with. On mobile platforms like Xamarin or mono, you may find that the garbage collector needs this kind of help in order to do its work.
That statement is begging to be fact checked. So, let us see...
.NET
.NET uses a generational mark and sweep garbage collector. You can see an outline of the algorithm in What happens during a garbage collection. In summary, it walks the object graph, and any object it cannot reach can be erased.
Thus, the garbage collector will correctly identify the items of the list as collectible in the same iteration, regardless of whether or not you clear the list. There is no need to decouple the objects beforehand.
This means that clearing the list does not help the garbage collector on the regular implementation of .NET.
Note: If there were another reference to the list, then the fact that you cleared the list would be visible.
Mono and Xamarin
Mono
As it turns out, the same is true for Mono.
Xamarin.Android
Also true for Xamarin.Android.
Xamarin.iOS
However, Xamarin.iOS requires additional considerations. In particular, MonoTouch will use wrapped Objective-C objects which are beyond the garbage collector. See Avoid strong circular references under iOS Performance. These objects require different semantics.
Xamarin.iOS will minimize the use of Objective-C objects by keeping a cache:
C# NSObjects are also created on demand when you invoke a method or a property that returns an NSObject. At this point, the runtime will look into an object cache and determine whether a given Objective-C NSObject has already been surfaced to the managed world or not. If the object has been surfaced, the existing object will be returned, otherwise a constructor that takes an IntPtr as a parameter is invoked to construct the object.
The system keeps these objects alive even if there are no references from managed code:
User-subclasses of NSObjects often contain C# state so whenever the Objective-C runtime performs a "retain" operation on one of these objects, the runtime creates a GCHandle that keeps the managed object alive, even if there are no C# visible references to the object. This simplifies bookkeeping a lot, since the state will be preserved automatically for you.
Emphasis mine.
Thus, under Xamarin.iOS, if there were a chance that the list might contain wrapped Objective-C objects, this code would help the garbage collector.
See the question How does memory management works on Xamarin.IOS, where Miguel de Icaza explains in his answer that the semantics are to "retain" the object when you take a reference and to "release" it when the reference is null.
On the Objective-C side, "release" does not mean destroying the object. Objective-C uses reference counting: when we "retain" an object its counter is incremented, and when we "release" it the counter is decremented. The system destroys the object when the counter reaches zero. See: About Memory Management.
Objective-C is therefore bad at handling circular references (if A references B and B references A, their reference counts never reach zero, even though they cannot be reached), so you should avoid them in Xamarin.iOS. In fact, forgetting to decouple references will lead to leaks in Xamarin.iOS... See: Xamarin iOS memory leaks everywhere.
Others
dotGNU also uses a generational mark and sweep garbage collector.
I also had a look at CrossNet (that compiles IL to C++), it appears they attempted to implement it too. I do not know how good it is.
It depends on which version of .NET you are working with. On mobile platforms like Xamarin or mono, you may find that the garbage collector needs this kind of help in order to do its work. Whereas on desktop platforms the garbage collector implementation may be more elaborate. Each implementation of the CLI out there is going to have its own implementation of the garbage collector, and it is likely to behave differently from one implementation to another.
I can remember 10 years ago working on a Windows Mobile application which had memory issues and this sort of code was the solution. This was probably due to the mobile platform requiring a garbage collector that was more frugal with processing power than the desktop.
Decoupling objects helps simplify the analysis the garbage collector needs to do and helps avoid scenarios where the garbage collector fails to recognise that a large graph of objects has actually become disconnected from all the threads in your application, which results in memory leaks.
Anyone who believes you can't have memory leaks in .NET is an inexperienced .NET developer. On desktop platforms, just ensuring Dispose is called on objects which implement it may be enough; however, with other implementations you may find it is not.
List.Clear() will decouple the objects in the list from the list and each other.
EDIT: So to be clear, I'm not claiming that any particular implementation currently out there is susceptible to memory leaks. And again, depending on when this answer is read, the robustness of the garbage collector on any implementation of the CLI currently out there could have changed since the time of writing.
Essentially I'm suggesting if you know that your code needs to be cross platform and used across many implementations of the .NET framework, especially implementations of the .NET framework for mobile devices, it could be worth investing time into decoupling objects when they are no longer required. In that case I'd start off by adding decoupling to classes that already implement Dispose, and then if needed look at implementing IDisposable on classes that don't implement IDisposable and ensuring Dispose is called on those classes.
How to tell for sure if it's needed? You need to instrument and monitor the memory usage of your application on each platform it is to be deployed on. Rather than writing lots of superfluous code, I think the best approach is to wait until your monitoring tools indicate you have memory leaks.
As mentioned in the docs:
List.Clear Method (): Count is set to 0, and references to other
objects from elements of the collection are also released.
In your 1st snippet:
if (list != null)
{
    list.Clear();
    list = null;
}
If you just set the list to null, you only release your variable's reference to the actual list object in memory (so the list object itself remains in memory), waiting for the Garbage Collector to come along and release its allocated memory.
But the problem is that your list may contain elements that hold references to other objects, for example:
list → objectA, objectB, objectC
objectB → objectB1, objectB2
So, after setting the list to null, the list itself has no reference any more and should be collected by the Garbage Collector later, but objectB1 and objectB2 are still referenced from objectB (which is still in memory), and because of that the Garbage Collector needs to analyse the whole object reference chain. To make it less confusing, this snippet uses the .Clear() method to break those links up front.
Clearing the list ensures that if the list is not garbage collected for some reason, then at the very least, the elements it contained can still be disposed of.
As stated in the comments, preventing other references to the list from existing requires careful planning, and clearing the list before nulling it doesn't incur a big enough performance hit to justify trying to avoid doing so.
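To make that concrete, here is a sketch (my own example, using a WeakReference to observe one element; the exact results depend on when the GC runs):

using System;
using System.Collections.Generic;

var items = new List<byte[]> { new byte[1024], new byte[1024] };
var second = items;                        // a second reference keeps the *list* alive

var probe = new WeakReference(items[0]);   // observe one element weakly

items.Clear();                             // the list no longer references its elements
items = null;                              // the list object itself is still reachable via 'second'

GC.Collect();
Console.WriteLine(probe.IsAlive);          // typically False: the element could be collected anyway
Console.WriteLine(second.Count);           // 0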
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
The CLR garbage collector actively goes through all objects that have been created and works out whether they are being used. But how does the garbage collector decide which objects are to be collected and which are still in use?
I understand that assigning null to an object reference will suffice. But what if I write only
string obj = new string(new char[] {'a'});
and never add the null-assignment line obj = null;?
How will the garbage collector determine when to clean it up?
The CLR Garbage Collector is (at its core) a so-called tracing GC. (The other "big" class of garbage collectors are so-called reference-counting GCs.)
Tracing GCs work by, well, recursively "tracing" the set of reachable objects, starting from a set of objects that are already known to be reachable. Here's how that works:
Assume that we already have a set of objects that we know are reachable. For every object in that set, follow all the references (e.g. fields, and also internal pointers such as the class pointer etc.). Those objects are also reachable. Repeat until you have visited all objects at least once. Now you know all reachable objects. (We could say that we have computed the transitive closure with respect to reachability.) All objects that you haven't visited are not reachable and thus eligible for garbage collection.
Now, we just have to figure out how to start this algorithm, i.e. how to get the first set of objects that are known to be reachable. Well, every language usually has a set of objects that are known to be always reachable. This set, from which we start our trace, is called the root set. It includes things like:
globals
pointers in CPU registers
objects referenced by local variables on the stack
objects on the stack
Thread-Local Storage
unsafe memory
native memory
VM-internal data structures
the root namespace
…
That's it.
There are, of course, many variations of this theme. The most simple implementation of this tracing idea is called mark-sweep. It has two phases, mark and sweep (duh!). The mark phase is the tracing phase: you trace the reachable objects and set a bit in the object header which says "yep, reachable". In the sweep phase, you collect all objects which don't have the bit set and reset the bit to false in the other objects.
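For illustration only, here is a toy sketch of those two phases over a hypothetical Node graph (this is not how the CLR implements it; a real collector works on raw memory rather than managed lists):

using System.Collections.Generic;
using System.Linq;

class Node
{
    public bool Marked;                          // the "yep, reachable" bit in the object header
    public List<Node> References = new List<Node>();
}

static class ToyMarkSweep
{
    static void Mark(Node node)
    {
        if (node == null || node.Marked) return;
        node.Marked = true;                      // mark phase: trace and flag reachable objects
        foreach (var child in node.References)
            Mark(child);
    }

    // Sweep phase: everything unmarked is "freed"; survivors get their bit reset for the next cycle.
    public static List<Node> Collect(List<Node> roots, List<Node> allObjects)
    {
        foreach (var root in roots)
            Mark(root);

        var survivors = allObjects.Where(o => o.Marked).ToList();
        foreach (var o in survivors)
            o.Marked = false;
        return survivors;
    }
}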
A slight improvement of this scheme is to keep a separate marking table. For one, you don't have to write all over the entire RAM just to set those marking bits (which throws all data out of the cache, for example, and also triggers a copy-on-write if the memory is shared with another process). And secondly, you don't have to visit the reachable objects to reset the marking bit, you can just throw away the marking table after you're done.
The biggest dis-advantage of this scheme is that it leads to memory fragmentation. The biggest advantage is that objects don't move around in memory, which means for example that you can hand out pointers to objects without fear that those pointers may become invalid.
Another very simple scheme is the semi-space copying collector (often attributed to Cheney). It is called "semi-space" because it always uses at most 50% of the allocated memory. It is also a tracing collector, but it is a copying collector instead of mark-sweep. Instead of marking the objects when visiting them, it copies them over to the empty half of the memory. Afterwards, the old half can simply be freed in constant time.
The advantage is that every time you copy the objects, they end up tightly packed in memory without holes, so there is no fragmentation. But they move around in memory, so you cannot just hand out pointers to those objects.
Note: the CLR's Garbage Collectors (it actually has two of them!) are much more complex and sophisticated than those two schemes I presented. They are, however, both tracing GCs.
The second big class of collectors are reference-counting collectors. Instead of tracing references only when a collection occurs, they count references every time a reference is created or destroyed. So, when you assign an object to a local variable, or a field, or pass it as an argument, …, the system increments a reference counter in the object header; and every time you assign a different object to a local variable, or the local variable goes out of scope, or the object that the field belongs to gets GCd, …, the reference count is decremented. If the reference count hits 0, there are no more references, and the object is eligible for garbage collection.
The big advantage of this scheme is that you always know exactly when an object becomes unreachable. The big disadvantage is that you can get disconnected cycles whose reference count(s) will never be 0. If you have a reference from A → B, from B → C, from C → A, and from D → B, then A's reference count is 1, B's reference count is 2, C's reference count is 1. If you now remove the reference from D, B's reference count drops to 1, and there is no reference from the rest of the system to either A or B or C, so they are all not reachable, but their reference count will never drop to 0, so they will never be collected.
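Here is that A → B → C → A example as a toy program (a hypothetical Counted class; the CLR does not work this way, which is exactly the point):

using System;

class Counted
{
    public string Name;
    public Counted Reference;      // the single outgoing reference, for simplicity
    public int RefCount;

    public Counted(string name) { Name = name; }
    public void AddRef() => RefCount++;
    public void Release()
    {
        if (--RefCount == 0)
            Console.WriteLine($"{Name} freed");
    }
}

class CycleDemo
{
    static void Main()
    {
        var a = new Counted("A");
        var b = new Counted("B");
        var c = new Counted("C");

        a.Reference = b; b.AddRef();   // A -> B
        b.Reference = c; c.AddRef();   // B -> C
        c.Reference = a; a.AddRef();   // C -> A
        b.AddRef();                    // D -> B; counts are now A=1, B=2, C=1

        b.Release();                   // D drops its reference: B falls to 1
        // Nothing reaches A, B or C any more, yet no counter ever hits 0,
        // so "freed" is never printed for any of them.
    }
}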
A third big idea in GC is the Generational Hypothesis:
Most objects die young
Older objects rarely reference younger objects
As it turns out, for typical systems, this is true for almost all objects, which means it makes sense to treat objects differently depending on their age. A generational GC divides the objects into different generations, and has different garbage collection and memory allocation strategies for each. (Let's leave it at that.)
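You can watch promotion happen with GC.GetGeneration (a small sketch; the exact numbers depend on the runtime and on whether a collection actually ran):

using System;

class GenerationDemo
{
    static void Main()
    {
        var o = new object();
        Console.WriteLine(GC.GetGeneration(o)); // 0: freshly allocated
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // typically 1: it survived a collection
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // typically 2: it survived again
        GC.KeepAlive(o);                        // ensure the JIT keeps o alive this far
    }
}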
For more information about garbage collection in general, you should read The Garbage Collection Handbook – The art of automatic memory management by Richard Jones, Antony Hosking, Eliot Moss.
This question already has an answer here:
When is an object subject to garbage collection?
(1 answer)
Closed 7 years ago.
I'm still somewhat confused by C#'s garbage collector. Documentation states that an object is garbage collected when there are no more references pointing to it.
So my question is, why is this object not immediately garbage collected? There is no reference to it. Let's assume that the background class itself creates no references either.
public void StartGame() {
// the background instance is created, but no reference is kept:
new Background("landscape.png", speed);
}
What you are describing more closely resembles memory management through reference counting, which checks whether to free the memory whenever the reference counter changes, typically when a referencing object is constructed or destroyed.
Garbage collection is a slightly different concept. Directly freeing the objects in GC-powered environments is typically not allowed or not recommended. Instead, a garbage collector is run occasionally (the runtime decides when to do that), which finds all the no-longer-referenced objects and frees the memory they occupy. The point is, you should not bother (because you can do nothing about it) with when exactly it's going to happen.
The garbage collector is like the garbage truck that drives around your neighborhood. You can't will it to pick up your trash once you put it on the side of the street. You have to wait for it to come on its own terms.
Garbage collectors are theoretically really simple: stop the world, determine what's no longer being used, collect, resume the world. But because this takes time, developers use complex algorithms to decide when the collector kicks in and what it collects, to make your program run as smoothly as possible. Some garbage is usually not a problem that affects your program.
If you are expecting your object to be collected as soon as it goes out of scope, you're probably using a finalizer to test this. Don't! Instead, implement IDisposable and call the Dispose method yourself, as soon as you're done with it. You can't rely on the garbage collector to collect your object at any particular time, if ever. That's why the BCL I/O classes all implement IDisposable for flushing streams, closing connections and cleaning up memory.
Or if you want to keep your object around, you need to keep an (indirect) reference to it. The object might just be collected on the next garbage collection cycle.
Well, there's one not recommended way to force the garbage truck to collect your garbage, using GC.Collect:
GC.Collect();
This will temporarily stop your program while all garbage is collected. However, this might still not clear your Background object while it's living on the stack or in some other place. You can't predict where the runtime will put your object and when it will release it, so be sure to at least exit the method that created the object before testing whether it has been collected using GC.Collect.
The garbage collector in C# is invoked only at certain moments and is generational. This means that an object that is still referenced when a GC pass runs is promoted by one generation. The lowest generation is garbage collected far more often than the rest.
You may read this article to understand more: https://msdn.microsoft.com/en-us/library/ee787088%28v=vs.110%29.aspx
You may alternatively call the GC.Collect method, but this is not recommended as .NET is pretty much able to handle its own memory and invoking this method kinda defeats the whole purpose.
Well, if you want to check whether an object will be garbage collected, you can test in your code whether it is eligible for collection as follows.
Modify your method a little for testing; alternatively, you could make reference a field that is set in the StartGame method.
public void StartGame(out WeakReference reference)
{
    reference = new WeakReference(new Background("landscape.png", speed));
}
After your method is called you can do the following to see if it is eligible for collection.
WeakReference reference;
StartGame(out reference);
GC.Collect();
GC.WaitForPendingFinalizers();
if (reference.IsAlive)
{
    Console.WriteLine("Background is not eligible for collection");
}
This is only for testing, you shouldn't be calling the Garbage collector otherwise.
The basic difference is that weak references are supposed to be reclaimed on each run of the GC (keeping the memory footprint low), while soft references ought to be kept in memory until the GC actually requires the memory (they try to extend lifetime but may fail at any time, which is useful e.g. for caches, especially of rather expensive objects).
To my knowledge, there is no clear statement as to how weak references influence the lifetime of an object in .NET. If they are true weak refs they should not influence it at all, but that would also render them pretty useless for what I believe is their main purpose, caching (am I wrong there?). On the other hand, if they act like soft refs, their name is a little misleading.
Personally, I imagine them to behave like soft references, but that is just an impression and not well founded.
Implementation details apply, of course. I'm asking about the mentality associated with .NET's weak references - are they able to extend lifetime, or do they behave like true weak refs?
(Despite a number of related questions I could not find an answer to this specific issue yet.)
Are C# weak references in fact soft?
No.
am I wrong there?
You are wrong there. The purpose of weak references is absolutely not caching in the sense that you mean. That is a common misconception.
are they able to extend lifetime, or do they behave like true weak refs?
No, they do not extend lifetime.
Consider the following program (F# code):
do
    let x = System.WeakReference(Array.create 0 0)
    for i = 1 to 10000000 do
        ignore (Array.create 0 0)
    printfn "Weak reference is %s" (if x.IsAlive then "alive" else "dead")
This heap allocates an empty array that is immediately eligible for garbage collection. Then we loop 10M times allocating more unreachable arrays. Note that this does not increase memory pressure at all so there is no motivation to collect the array referred to by the weak reference. Yet the program prints "Weak reference is dead" because it was collected nevertheless. This is the behaviour of a weak reference. A soft reference would have been retained until its memory was actually needed.
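For readers more at home in C#, a rough equivalent of the test above might look like this (a sketch; whether the weak reference dies still depends on when gen0 collections happen):

using System;

class WeakTest
{
    static void Main()
    {
        var x = new WeakReference(new int[0]);

        for (int i = 0; i < 10_000_000; i++)
        {
            _ = new int[0];    // churn gen0 with unreachable arrays
        }

        Console.WriteLine("Weak reference is {0}", x.IsAlive ? "alive" : "dead");
    }
}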
Here is another test program (F# code):
open System

let isAlive (x: WeakReference) = x.IsAlive

do
    let mutable xs = []
    while true do
        xs <- WeakReference(Array.create 0 0) :: List.filter isAlive xs
        printfn "%d" xs.Length
This keeps filtering out dead weak references and prepending a fresh one onto the front of a linked list, printing out the length of the list each time. On my machine, this never exceeds 1,000 surviving weak references. It ramps up and then falls to zero in cycles, presumably because all of the weak references are collected at every gen0 collection. Again, this is the behaviour of a weak reference and not a soft reference.
Note that this behaviour (aggressive collection of weakly referenced objects at gen0 collections) is precisely what makes weak references a bad choice for caches. If you try to use weak references in your cache then you'll find your cache getting flushed a lot for no reason.
I have seen no information that indicates that they would increase the lifetime of the object they point to. And the articles I read about the algorithm the GC uses to determine reachability do not mention them in this way either. So I expect them to have no influence on the lifetime of the object.
Weak
This handle type is used to track an object, but allow it to be collected. When an object is collected, the contents of the GCHandle are zeroed. Weak references are zeroed before the finalizer runs, so even if the finalizer resurrects the object, the Weak reference is still zeroed.
WeakTrackResurrection
This handle type is similar to Weak, but the handle is not zeroed if the object is resurrected during finalization.
http://msdn.microsoft.com/en-us/library/83y4ak54.aspx
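These are the handle types exposed through GCHandle.Alloc. A small sketch of the difference (the exact output depends on GC and finalizer timing, so the comments describe the typical case; run it optimized so the local does not keep the object alive):

using System;
using System.Runtime.InteropServices;

class Tracked
{
    ~Tracked() { /* finalizer runs after the plain Weak handle has been zeroed */ }
}

class HandleDemo
{
    static void Main()
    {
        var obj = new Tracked();
        GCHandle weak  = GCHandle.Alloc(obj, GCHandleType.Weak);
        GCHandle track = GCHandle.Alloc(obj, GCHandleType.WeakTrackResurrection);
        obj = null;

        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine(weak.Target == null);   // typically True: zeroed before finalization
        Console.WriteLine(track.Target == null);  // typically False: still set until the object is truly reclaimed

        weak.Free();
        track.Free();
    }
}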
There are a few mechanisms by which an object that's unreachable can survive a garbage collection.
The generation of the object is larger than the generation of the GC that happened. This is particularly interesting for large objects, which are allocated on the large-object-heap and are always considered Gen2 for this purpose.
Objects with a finalizer and all objects reachable from them survive the GC.
There might be a mechanism where former references from old objects can keep young objects alive, but I'm not sure about that.
Yes
Weak references do not extend the lifespan of an object, thus allowing it to be garbage collected once all strong references have gone out of scope. They can be useful for holding on to large objects that are expensive to initialize but should be available for garbage collection if they are not actively in use.