I'd like to clarify an issue with CLR managed heaps. There are two managed object heaps: the Large Object Heap (LOH) and the Small Object Heap (SOH). I know that objects larger than 85,000 bytes are placed on the LOH, and that the SOH has three generations (0, 1, 2). Is the LOH part of the SOH (its generation 2 objects), or is it a separate heap whose objects are always treated as generation 2? And is the LOH collected along with the SOH's generation 2 objects?
The Small Object Heap has generations that are collected from time to time. At the end of a collection the heap is fragmented, so it needs to be compacted. If large objects lived in this heap, compacting it would take a long time, so the designers gave large objects their own heap, the Large Object Heap, which is exempt from that expensive compaction.
There is a really good book on this:
ftp://support.red-gate.com/ebooks/under-the-hood-of-net-memory-management-part1.pdf
(the LOH is covered on page 55)
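To make the LOH behavior concrete, here is a minimal sketch (the sizes are illustrative; the documented threshold is 85,000 bytes). Objects allocated on the LOH report generation 2 from GC.GetGeneration even though they have never survived a collection, which is consistent with the LOH being collected together with generation 2:

```
using System;

class LohDemo
{
    static void Main()
    {
        byte[] small = new byte[1000];    // below the threshold: Small Object Heap, starts in gen 0
        byte[] large = new byte[100000];  // above ~85,000 bytes: Large Object Heap

        // LOH objects are reported as generation 2 right away.
        Console.WriteLine(GC.GetGeneration(small)); // typically 0
        Console.WriteLine(GC.GetGeneration(large)); // 2
    }
}
```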
I am curious how the CLR knows that a particular object is no longer used by any other object and is therefore dead. I know the basics of the garbage collector, but internally, how does the CLR find dead objects?
https://msdn.microsoft.com/en-us/library/ee787088(v=vs.110).aspx#Anchor_4
A garbage collection has the following phases:
A marking phase that finds and creates a list of all live objects.
A relocating phase that updates the references to the objects that will be compacted.
A compacting phase that reclaims the space occupied by the dead objects and compacts the surviving objects. The compacting phase moves objects that have survived a garbage collection toward the older end of the segment.
The garbage collector uses the following information to determine whether objects are live:
Stack roots. Stack variables provided by the just-in-time (JIT) compiler and stack walker.
Garbage collection handles. Handles that point to managed objects and that can be allocated by user code or by the common language runtime.
Static data. Static objects in application domains that could be referencing other objects. Each application domain keeps track of its static objects.
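Here is a small sketch of the reachability idea described above, using WeakReference (which tracks an object without acting as a root). Exact lifetimes can vary with build settings and JIT behavior, so treat the output as indicative:

```
using System;

class ReachabilityDemo
{
    static object rooted = new object();   // reachable through a static root

    static void Main()
    {
        // Track an object without keeping it alive.
        var weak = new WeakReference(new object());

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // No stack root, static field, or GC handle refers to the tracked object,
        // so the mark phase never reaches it and its memory is reclaimed.
        Console.WriteLine(weak.IsAlive);   // expected: False
        Console.WriteLine(rooted != null); // True: still reachable via the static root
    }
}
```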
I want to set up an automatic memory-cleaning method for a running desktop application, because it throws an "out of memory" error.
Is there any way to do this?
There is already an "automatic cleaning method": the GC. You should virtually never need to tell it what to do - it understands memory better than most people do. If your code is throwing OOM, you need to investigate why. For example: are you leaking objects? (static event handlers are notorious for this - see the sketch below); are you asking for huge slabs of contiguous memory (huge arrays, etc.)? are you asking for an array of more than 2 GiB (without large array support enabled)? are you running on 32-bit and just using lots of memory? is it actually not an OOM condition at all, but GDI+ handle exhaustion (which manifests in the same way)?
The first thing to check is how much memory your process is using - and how much free memory the OS has - when it throws OOM. If there is plenty of free memory, it isn't actually OOM (unless you're using over 1 GiB on a 32-bit system, in which case all bets are off).
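As a concrete illustration of the static event handler leak mentioned above, here is a hypothetical sketch (the type and event names are made up for the example). Every Subscriber stays reachable through the static event's invocation list until it unsubscribes, so the GC can never collect it:

```
using System;

// Hypothetical static event source.
static class AppEvents
{
    public static event EventHandler SomethingHappened;
    public static void Raise() => SomethingHappened?.Invoke(null, EventArgs.Empty);
}

class Subscriber
{
    private readonly byte[] buffer = new byte[1024 * 1024]; // 1 MiB per instance

    public Subscriber()
    {
        // Subscribing to a static event roots this instance via the event's invocation list.
        AppEvents.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { }

    public void Detach()
    {
        // Unsubscribing breaks the root and lets the GC reclaim the instance.
        AppEvents.SomethingHappened -= OnSomethingHappened;
    }
}
```

Creating many Subscriber instances without ever calling Detach keeps all of their buffers alive, which eventually looks exactly like an out-of-memory leak even though the GC is working correctly.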
How does the memory overhead of a HashSet depend on the size of the objects it contains? And what if the objects have different sizes?
The memory used by the HashSet itself is related to the size of a variable of the element type, not to the size of the objects it refers to. For all reference types, that is the size of a reference, regardless of which type it is.
And what if objects have different sizes?
A HashSet<T> only stores elements of a single declared type. For reference types the objects themselves may well have different sizes, but the set only holds references to them, and every reference is the same size.
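A rough way to see this is to measure the set's own footprint; GC.GetTotalMemory only gives approximate numbers, so treat this as a sketch rather than a benchmark. The per-entry cost of the HashSet<object> below stays roughly constant whether the referenced objects are tiny or huge, because the set only stores references:

```
using System;
using System.Collections.Generic;

class HashSetOverheadDemo
{
    static void Main()
    {
        const int count = 100000;
        var items = new object[count];
        for (int i = 0; i < count; i++) items[i] = new object();

        long before = GC.GetTotalMemory(true);
        var set = new HashSet<object>(items);   // stores references, not copies of the objects
        long after = GC.GetTotalMemory(true);

        // Approximate per-entry overhead of the set itself; making the referenced
        // objects bigger would not change this number.
        Console.WriteLine((after - before) / (double)count);

        GC.KeepAlive(set);
        GC.KeepAlive(items);
    }
}
```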
Just trying to understand the OutOfMemoryException in .NET.
If I create an infinite while loop, and in the loop I create a new object that writes something to a file, will this application run out of memory? Will this cause an OutOfMemoryException?
An OutOfMemoryException is thrown whenever the application tries and fails to allocate memory to perform an operation. According to Microsoft's documentation, the following operations can potentially throw an OutOfMemoryException:
Boxing (i.e., wrapping a value type in an Object)
Creating an array
Creating an object
If you try to create an infinite number of objects, then it's pretty reasonable to assume that you're going to run out of memory sooner or later.
(Note: don't forget about the garbage collector. Depending on the lifetimes of the objects being created, it will delete some of them if it determines they're no longer in use.)
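Here is a sketch of the scenario in the question (the file name is illustrative). Because nothing holds on to the objects from previous iterations, the GC can reclaim them, and the loop does not normally exhaust memory; it would only throw OutOfMemoryException if every allocation were kept reachable, e.g. by adding each one to a list that never shrinks:

```
using System;
using System.IO;

class LoopDemo
{
    static void Main()
    {
        while (true)
        {
            // A new object every iteration; the reference is dropped at the end
            // of the iteration, so the previous instances become garbage.
            var line = new string('x', 100) + Environment.NewLine;
            File.AppendAllText("output.txt", line);
        }

        // By contrast, something like
        //   var all = new List<string>(); while (true) all.Add(new string('x', 100));
        // keeps every allocation reachable and will eventually throw OutOfMemoryException.
    }
}
```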
Everyone knows that structs are value types and classes are reference types and that therefore structs are allocated on the stack and that objects are allocated on the heap.
What I'd like to know is what is the implication of something being allocated on the stack as opposed to something being allocated on the heap?
One general implication of allocating memory on the stack is that it is lost once it goes out of scope (e.g., when the function/method returns). Heap memory can outlive the scope that allocated it; its lifetime is not tied to a stack frame.
Update:
Another important item that I didn't mention is that heap memory must be managed by someone. Depending on the language, that someone is either the programmer (C, C++, etc.) or a garbage collector (Java, C#, etc.). If heap memory isn't cleaned up when it's no longer needed, you end up with memory leaks.
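A short sketch of what this looks like in C#: assigning a struct copies the whole value (its storage dies with the scope that holds it), while assigning a class copies only a reference to a heap object whose lifetime is managed by the GC:

```
using System;

struct PointStruct { public int X, Y; }   // value type: copied on assignment
class PointClass   { public int X, Y; }   // reference type: lives on the heap

class StackVsHeapDemo
{
    static void Main()
    {
        // Copy semantics: s2 is an independent copy of s1.
        PointStruct s1 = new PointStruct { X = 1 };
        PointStruct s2 = s1;
        s2.X = 99;
        Console.WriteLine(s1.X);   // 1: s1 is unaffected

        // Reference semantics: c1 and c2 refer to the same heap object,
        // which stays alive until the GC finds it unreachable.
        PointClass c1 = new PointClass { X = 1 };
        PointClass c2 = c1;
        c2.X = 99;
        Console.WriteLine(c1.X);   // 99: same object
    }
}
```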