Closed 9 years ago as opinion-based. It is not accepting answers.
In your actual programming experience, how did this knowledge of STACK and HEAP actually rescue you in real life? Any story from the trenches? Or is this concept just good for filling up programming books and theory?
The distinction in .NET between the semantics of reference types and value types is a much more important concept to grasp.
Personally, I have never bothered thinking about the stack or heap in all my years of coding (just CLR based).
To me it is the difference between being a "developer/programmer" and a "craftsman". Anyone can learn to write code and watch things "magically happen" without knowing why or how. To be truly valuable at what you do, I think it is important to learn as much as you can about the framework you're using. Remember, it's not just a language; it's a framework that you leverage to create the best application you are capable of.
I've analyzed many memory dumps over the years and found it extremely helpful knowing the internals and differences between the two. Most of these have been OutOfMemory conditions and unstable applications. This knowledge is absolutely necessary to use WinDbg when looking at dumps. When investigating a memory dump, knowing how memory is allocated between the kernel/user-mode process and the CLR can at least tell you where to begin your analysis.
For example, let's take an OOM case:
The allocated memory you see in the Heap Sizes, Working Set, Private Memory, Shared Memory, Virtual Memory, Committed Memory, Handles, and Threads can be a big indicator of where to start.
There are about eight different heaps that the CLR uses:
Loader Heap: contains CLR structures and the type system
High Frequency Heap: statics, MethodTables, FieldDescs, interface map
Low Frequency Heap: EEClass, ClassLoader and lookup tables
Stub Heap: stubs for CAS, COM wrappers, P/Invoke
Large Object Heap: memory allocations of 85,000 bytes or more
GC Heap: user allocated heap memory private to the app
JIT Code Heap: memory allocated by mscoree (the Execution Engine) and the JIT compiler for managed code
Process/Base Heap: interop/unmanaged allocations, native memory, etc
Finding what heap has high allocations can tell me if I have memory fragmentation, managed memory leaks, interop/unmanaged leaks, etc.
Knowing that each thread in your app gets 1 MB (on x86) or 4 MB (on x64) of reserved stack space reminds me that if the app has 100 threads, that is an additional 100 MB of virtual memory usage on x86.
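If per-thread stack reservation becomes a problem, it can be tuned: the Thread constructor has an overload that takes a maximum stack size. A minimal sketch (the 256 KB figure is just an illustrative choice for a worker that doesn't recurse deeply):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // The default stack reservation comes from the executable's PE header
        // (typically 1 MB for x86 builds). The Thread constructor overload
        // below overrides it per thread.
        var worker = new Thread(() => Console.WriteLine("worker running"),
                                maxStackSize: 256 * 1024);
        worker.Start();
        worker.Join();
    }
}
```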
I had a client whose Citrix servers were crashing with OutOfMemory problems, unstable and slow to respond, when their app was running on them in multiple sessions. After looking at the dump (I didn't have access to the server), I saw that there were over 700 threads being used by that instance of the app! Knowing the per-thread stack allocation allowed me to determine that the OOMs were caused by the high thread count.
In short, because of what I do for my "role", it is invaluable knowledge to have. Of course even if you're not debugging memory dumps it never hurts either!
It certainly is helpful to understand the distinction when one is building compilers.
Here are a few articles I've written about how various issues in memory management impact the design and implementation of the C# language and the CLR:
http://blogs.msdn.com/ericlippert/archive/tags/Memory+Management/default.aspx
I don't think it matters if you're just building average business applications, which I think most .NET programmers are.
The books I've seen just mention stack and heap in passing as if memorizing this fact is something of monumental importance.
Personally, this is one of the very few technical questions that I ask every person I'm going to hire.
I feel that it is critical to understanding how to use the .NET framework (and most other languages). I never hire somebody who doesn't have a clear understanding of memory usage on the stack vs. the heap.
Without understanding this, it's almost impossible to understand the garbage collector, understand .NET performance characteristics, and many other critical development issues.
The important distinction is between reference types and value types. It's not true that "value types go on the stack, reference types go on the heap". Jon Skeet has written about this and so has Eric Lippert.
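A minimal C# sketch of that semantic difference (the type names here are made up for illustration):

```csharp
using System;

struct PointValue { public int X; }   // value type: assignment copies the data
class PointRef    { public int X; }   // reference type: assignment copies the reference

class Program
{
    static void Main()
    {
        var v1 = new PointValue { X = 1 };
        var v2 = v1;               // independent copy of the data
        v2.X = 99;
        Console.WriteLine(v1.X);   // 1: v1 is unaffected

        var r1 = new PointRef { X = 1 };
        var r2 = r1;               // both variables refer to the same object
        r2.X = 99;
        Console.WriteLine(r1.X);   // 99: the change is visible through r1
    }
}
```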
We had a Claim Entity (business object) which contained the data for an entire claim. One of the requirements of the application was to create an audit trail of every single value changed by the user. In order to do this without hitting the database twice, we would maintain the original Claim Entity in the form alongside a working Claim Entity. The working Claim Entity would get updated when the user clicked Save, and we would then compare the original Claim Entity's properties with the corresponding working Claim Entity's properties to determine what changed.

One day we noticed that our compare method was never finding a difference. This is where my understanding of the stack and heap saved my rear end (specifically, value types vs. reference types). Because we needed to maintain two copies of the same object in memory, the developer simply created two objects
Dim originalClaim As ClaimBE
Dim workingClaim As ClaimBE
then called the business layer method to return the claim object and assigned the same claimBE to both variables
originalClaim = BLL.GetClaim()
workingClaim = originalClaim
hence two reference variables pointing to the same object on the heap, so the compare could never find a difference. Once we cloned the object into the working variable instead, the audit worked. Nightmare averted.
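The bug and its fix can be sketched in C# (the original code was VB.NET; this ClaimBE is a hypothetical stand-in for the real business object):

```csharp
using System;

// Hypothetical stand-in for the ClaimBE business object in the story.
class ClaimBE
{
    public decimal Amount;
    public ClaimBE() { }
    public ClaimBE(ClaimBE other) { Amount = other.Amount; }  // copy constructor
}

class Program
{
    static void Main()
    {
        var originalClaim = new ClaimBE { Amount = 100m };

        // Bug: assignment copies the reference, not the object, so both
        // variables see every edit and a compare can never find a difference.
        var workingClaim = originalClaim;
        Console.WriteLine(ReferenceEquals(originalClaim, workingClaim)); // True

        // Fix: clone, so each variable owns its own copy of the data.
        workingClaim = new ClaimBE(originalClaim);
        workingClaim.Amount = 250m;
        Console.WriteLine(originalClaim.Amount);  // 100
        Console.WriteLine(workingClaim.Amount);   // 250
    }
}
```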
Closed 16 days ago as seeking recommendations for books, tools, or software libraries. It is not accepting answers.
I am really confused about GC. My head is a mess when I am trying to explain this to somebody.
Do you guys have a guide or set of articles or a good book that explains GC in .NET?
I programmed a little bit in C a long time ago, not much in C++, but mostly in C#. All I can explain is the using keyword for opening files, and Dispose, whose internals I can't really explain.
I googled and YouTubed about GC, and all I get is empty interview terminology to memorize, which I don't want to do. I am looking first for its history and how memory management works in C and C++. I don't want random keywords (managed heap, three generations, mutex, weak references); fine, I want to know about those too, but I need an order. Reading about GC feels like a puzzle to me: I open ten tabs, then I forget where I started, and it's really frustrating.
I really need a dummy guide, from C to IDisposable, even if it takes 500 pages I am gonna go through it.
I know it might have been asked a lot of times, and it might be a dumb question, but I am never going to learn if I don't ask for help. I googled myself; I mostly get interview responses which aren't followed by code, and it's like reading a med-school book. I am never going to memorize anything if it doesn't make sense.
A very high-level summary:
Garbage collection is one of the key differentiators between managed languages like C# and unmanaged languages like C and C++.
Managed languages take care of allocating and deallocating memory for your data objects. Garbage collection is just the automatic freeing of memory when you don't need it anymore. C and C++ don't do this for you - you have to do it yourself, or else you will eventually run out of memory. Obviously folks have come up with strategies over the years for dealing with this (reference counters, etc.), but there's really no substitute for the automatic garbage collector of a managed language.
The truth is in C# you rarely have to worry about garbage collection. There are a handful of scenarios where you can accidentally step into pitfalls that prevent it from happening on some objects - we call these memory leaks - but that's a bigger topic.
The Wikipedia article on garbage collection is pretty decent if you want to try to get into the nitty gritty. Otherwise if you're just getting into C# or explaining C# to someone new to it I honestly wouldn't think about it in the beginning. That's sort of the point - it exists so that you don't have to think about it.
Also using and Dispose are actually not really related to garbage collection (maybe indirectly). In fact using and Dispose are closer to resource management strategies in unmanaged languages. That is to say they represent manual resource deallocation. But Dispose isn't supposed to be used for memory deallocation either except in rare circumstances. Rather it's supposed to be for cleaning up any other resources that might be in use, such as open files.
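A small sketch of that separation: using/Dispose releases the unmanaged resource (the file handle) deterministically, while the object's memory is left to the GC:

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // `using` guarantees Dispose() is called when the block exits,
        // releasing the OS file handle immediately. The memory for the
        // StreamWriter object itself is reclaimed later by the GC.
        using (var writer = new StreamWriter("log.txt"))
        {
            writer.WriteLine("hello");
        }   // writer.Dispose() runs here, even if an exception was thrown

        // The handle is already closed, so another open succeeds at once.
        Console.WriteLine(File.ReadAllText("log.txt").Trim());
    }
}
```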
Here is a list of books you might like:
https://www.amazon.in/Memory-Management-Implementation-Programming-Development/dp/1556223471
Memory Management: Algorithms and Implementation in C/C++ (Windows Programming/Development)
https://www.amazon.com/Garbage-Collection-Algorithms-Automatic-Management/dp/0471941484
Garbage Collection: Algorithms for Automatic Dynamic Memory Management
You can find the official documentation on GC in .NET here:
https://learn.microsoft.com/en-us/dotnet/standard/garbage-collection/
The information should be of some help to you; I am not very familiar with it myself and will be learning more about it too.
Closed 4 months ago for lacking debugging details. It is not accepting answers.
I have an x86 WPF application that displays quite a lot of data with live streaming from RabbitMQ. The application starts at about 500 MB of memory, but during the day, as users load more and more data, it may go up to 900-1000 MB. As soon as it hits the roughly 900 MB threshold, the application becomes very slow and not very responsive. For example, editing the DevExpress grid takes time (I must admit that each modification triggers plenty of live actions, but it is all fine when memory is below 900 MB).
Users have high spec machines (i7 CPU, 128 GB RAM), plenty of free resources.
We checked the application for memory leaks and CPU usage, and everything is OK. Growth from 500 MB to 900 MB is expected, since more data is loaded.
From what I understand, an x86 process is limited to 2 GB, but for me the application is slow (and starts throwing OutOfMemory exceptions) at around 900 MB.
What should we do? What needs to be checked?
If you can, switch to x64. By now there should be little reason for 32-bit code, unless you are using some really ancient libraries.
It also depends on what you are measuring, i.e. total process memory or actual memory used; a memory profiler should give you the latter. The garbage collector will need a bit more memory than is actually used, but 100% overhead seems a bit much.
Another possible reason is memory fragmentation. While small allocations are automatically defragmented/compacted, larger allocations (85 KB+, last I checked) are placed on the Large Object Heap (LOH), which is not automatically compacted. This can lead to situations where there is plenty of memory available, but no single "hole" large enough to fulfill the memory request. A good memory profiler should give you some idea of the degree of LOH fragmentation. See also The large object heap on Windows systems
The LOH can be manually compacted by running
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
But this is kind of a band-aid. Moving to x64 would be better, and using some kind of memory pool that allocates fixed-size blocks to avoid fragmentation would probably be best.
As a rule of thumb, .NET works best when allocations are small and either really short-lived or alive for the entire application lifetime. The former are handled by the fast Gen0/1 collections; the latter are placed in Gen2/LOH and rarely collected.
High-frequency, large, variable-sized allocations are probably the worst-case scenario for the garbage collector/memory allocator.
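One way to get fixed-size, reusable blocks on modern .NET is ArrayPool&lt;T&gt; from System.Buffers; a sketch (assuming .NET Core or the System.Buffers package is available):

```csharp
using System;
using System.Buffers;

class Program
{
    static void Main()
    {
        // Renting from the shared pool reuses large buffers instead of
        // allocating a fresh 85 KB+ array on the LOH for every request.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(100_000);
        try
        {
            // Rent may return a buffer larger than requested.
            Console.WriteLine(buffer.Length >= 100_000);
        }
        finally
        {
            // Returning the buffer makes it available for the next Rent.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```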
Closed 3 years ago for needing more focus. It is not accepting answers.
I read this article: https://medium.com/microsoft-open-source-stories/how-microsoft-rewrote-its-c-compiler-in-c-and-made-it-open-source-4ebed5646f98
Since C# has a built in garbage collector, is Roslyn slower than the previous compiler which was written in C++? Did they perform any benchmarks?
Let me address a question that you didn't explicitly ask but applies to your question.
Question: Is explicit garbage collection faster than implicit garbage collection?
Answer: As you may already know, C and C++ use explicit memory management, which means that free() (or delete) must be called to deallocate memory allocated on the heap. On the other hand, C# uses implicit garbage collection, which means heap memory is deallocated in the background. The key here is that implicit garbage collection deallocates memory when needed, at times the runtime judges optimal, while explicit management always deallocates each object individually (if done correctly). The runtime achieves this by coordinating with the OS and using various collection algorithms. All in all, in most situations implicit garbage collection will perform better than explicit deallocation, for the reasons above. For more info, check out this post.
Answer to your question: Because I have not seen any benchmarks myself, it is almost impossible to say for sure whether one would be faster than the other. Many features besides garbage collection affect the speed of each implementation. To clarify, C# is a bytecode-based language that uses a JIT (Just-In-Time) compiler. If I had to choose, I would expect the C++ implementation to be faster, because JIT optimization falls short of the C++ compiler's in some cases. Again, how fast these two languages perform will depend on the situation; for example, there are some optimizations that a JIT can perform that are impossible for a C++ compiler.
Closed 9 years ago as unclear; it cannot be reasonably answered in its current form.
I have a service that I query once in a very long while, and I would like to "streamline", or improve the efficiency of, its memory allocation.
Most of the time it just sits and waits, and once in awhile it gets a request that requires allocating a lot of memory, and do some processing on it. I don't know the types or structure in advance - it depends on the request, and varies wildly.
Now, the big processing request is preceded by some chatter (other requests) that might take a few seconds.
What I want to do is, when the chatter (smaller requests) starts, say to the .NET Framework: go to Windows and get yourself a couple of GBs of memory so it'll be available faster when I ask; and when I'm done, say to .NET: everything I'm not currently using, you can give back, because I'm not going to need it for a while.
I'm starting profiling as we speak... but I suspect this would be among the issues that could be improved.
I'll try to clarify the situation.
I have a service that sits on a server and 95% of the time just does nothing. Once in a long while it gets a request to do some mostly memory intensive processing.
I know a little bit of time in advance that it's all going to happen.
All I want to do is hint to the GC: "We're going to need a lot of memory soon" and later "We're not going to need anything special for a while".
OK.
I've done profiling, and decided I don't care about this.
The allocation does take some time (several to several dozen milliseconds), but it's insignificant compared to the rest of the processing...
Regarding the releasing part, it happens eventually and doesn't really interfere with the rest of the server...
If you want to be able to reserve a chunk of memory for your uses then please see:
allocating "unmanaged" memory in c#
Note that doing so can be risky; the garbage collector and memory allocation in the .NET runtime are already rather good.
If the allocated data can largely be cached, I'd recommend caching it with WeakReference, so that quick successive requests can benefit from the cached data, while if a garbage collection occurs between requests spaced a decent amount apart, the data can be released and simply re-created on the next request.
See: Weak reference benefits
And: http://msdn.microsoft.com/en-gb/library/system.weakreference.aspx
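A sketch of the WeakReference caching idea (BuildExpensiveData is a made-up placeholder for the real work):

```csharp
using System;

class Program
{
    // Placeholder for whatever expensive allocation the service really does.
    static byte[] BuildExpensiveData() => new byte[10_000_000];

    static WeakReference cache;

    static byte[] GetData()
    {
        // Reuse the cached array if the GC hasn't reclaimed it yet;
        // otherwise rebuild it. Quick successive requests hit the cache,
        // while widely spaced ones let the memory be freed in between.
        var data = cache?.Target as byte[];
        if (data == null)
        {
            data = BuildExpensiveData();
            cache = new WeakReference(data);
        }
        return data;
    }

    static void Main()
    {
        var a = GetData();
        var b = GetData();   // no collection in between: same instance
        Console.WriteLine(ReferenceEquals(a, b));
    }
}
```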
The GC is most of the time smart enough to do this for you. However, it is an architectural problem and can be dealt with by modifying the flow of activities in the service.
E.g., you can allocate the objects required for processing the big request in advance, before the request comes. For deallocation, either explicitly implement the IDisposable interface on them and dispose of them once they are used, or leave it to the GC.
Further, you have to understand how memory allocation works. To get memory allocated for .NET objects, you must know the type of object in advance; just allocating a plain block of memory is in no way helpful to you. Most of the time, the object creation or cloning code is more resource-consuming than the mallocs the framework uses to allocate memory for the object.
Considering the details, I would say that even if you could successfully pull off this routine, it would make your code much more complex and might add a few more bugs. Better to leave this to the .NET framework; it is very good at allocating and deallocating memory.
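For what's being asked, later .NET versions (4.6+) do expose a direct hint: GC.TryStartNoGCRegion reserves an allocation budget up front and suppresses collections until EndNoGCRegion. A hedged sketch (the 8 MB budget is illustrative; it must fit in the ephemeral segment, and EndNoGCRegion throws if the budget was exhausted mid-region):

```csharp
using System;

class Program
{
    // Placeholder for the memory-intensive processing in the question.
    static void ProcessBigRequest() => Console.WriteLine("processing");

    static void Main()
    {
        // Ask the runtime to commit a budget now and avoid collections
        // while the critical request runs (.NET Framework 4.6+ / .NET Core).
        if (GC.TryStartNoGCRegion(8 * 1024 * 1024))
        {
            try
            {
                ProcessBigRequest();
            }
            finally
            {
                GC.EndNoGCRegion();   // collections resume after this
            }
        }
        else
        {
            ProcessBigRequest();      // fall back to normal GC behavior
        }
    }
}
```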
This question already has answers here: When is it acceptable to call GC.Collect? Closed as a duplicate 11 years ago.
Is using the GC.Collect method necessary, and is it good practice?
If yes, when should I use it?
If no, why not?
Edit: According to MSDN: "Use this method to try to reclaim all memory that is inaccessible." What does that mean?
As a general rule... don't use it.
GC.Collect forces a collection by the garbage collector, which, among other things, means pausing all the threads of the program so that the garbage collector can verify which objects are no longer referenced and reclaim the unused memory.
Usually the GC will decide on its own when it should collect memory, for example when your program is idle or when allocated (virtual) memory is getting low and it needs to free some in order to allocate more. In my experience the .NET GC (as in Microsoft's) is pretty intelligent and does what it does rather well. I don't have experience with other GCs (as in Mono's), but they will still probably do a better job than the developer at deciding when to perform a collection.
That automatic behavior, of course, has a good impact on performance, and in 99% of cases the GC knows the best time to perform a collection better than you do. So no, it's not good practice, and you should only do it when you have a really good reason to, along with a deep understanding of what it does and the possible consequences it may have.
Not generally recommended; not generally considered good practice.
In general the GC knows better than you what needs cleaning up, and when. Best practice is to leave the GC alone to do its business unhindered unless you have concrete proof that the GC's strategy is causing problems that can be solved by forcing it to behave differently.
It is usually not necessary, and usually not a good idea to use it.
There are times when you need to use it, but usually you are forcing a collection at a non-optimal time, which has a performance hit.
There is good discussion on the matter here and here.
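For completeness, a sketch of one of the rare patterns generally considered defensible: forcing a full collection right after a large object graph has been released and the app is idle:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Simulate a large object graph, e.g. a document the user just closed.
        var document = new byte[50_000_000];
        Console.WriteLine(GC.GetTotalMemory(false) > 50_000_000);

        document = null;                  // drop the only reference

        // The classic "collect now, while the user won't notice" sequence.
        GC.Collect();                     // full, blocking collection
        GC.WaitForPendingFinalizers();    // let finalizers run
        GC.Collect();                     // reclaim objects they released

        Console.WriteLine(GC.GetTotalMemory(true) < 50_000_000);
    }
}
```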
I have explained a few things relevant here:
WPF memory leak