I have an x86 WPF application that displays quite a lot of data, live-streamed from RabbitMQ. The application starts at about 500 MB of memory, but during the day, as users load more and more data, it may go up to 900-1000 MB. As soon as it hits the roughly 900 MB threshold the application becomes very slow and much less responsive. For example, editing the DevExpress grid takes time (I must admit that each modification triggers plenty of LIVE actions, but it is all fine when memory is below 900 MB).
Users have high-spec machines (i7 CPU, 128 GB RAM) with plenty of free resources.
We diagnosed the application for memory leaks and CPU usage, and everything is OK. The growth from 500 MB to 900 MB is expected, since more data is loaded.
From what I understand, an x86 process is limited to 2 GB, but for me the application is already slow (and starts throwing OutOfMemoryException) at around 900 MB.
What should we do? What needs to be checked?
If you can, switch to x64. By now there should be little reason for 32-bit code, unless you are using some really ancient libraries.
It also depends on what you are measuring, i.e. total process memory or actual memory used. A memory profiler should give you the latter. The garbage collector will need a bit more memory than is actually used, but 100% overhead seems a bit much.
Another possible reason is memory fragmentation. While small memory allocations will be automatically defragmented/compacted, larger allocations (85 KB and up, last I checked) are placed on the Large Object Heap (LOH), which is not automatically compacted. This can lead to situations where there is plenty of memory available, but no single "hole" large enough to fulfill the memory request. A good memory profiler should give you some idea of the degree of LOH fragmentation. See also The large object heap on Windows systems.
The LOH can be manually compacted by running
// requires: using System; using System.Runtime;
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // the LOH is compacted during this blocking, full collection
But this is something of a band-aid. Moving to x64 would be better, and using some kind of memory pool that allocates fixed-size blocks to avoid fragmentation would probably be best.
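A minimal sketch of that fixed-size-block idea, assuming the heavy allocations can be expressed as byte buffers and that System.Buffers.ArrayPool<T> is available (built into newer .NET, or via the System.Buffers NuGet package); the SnapshotProcessor/FillAndProcess names are placeholders, and on older frameworks a hand-rolled pool of fixed-size byte[] blocks serves the same purpose:

using System;
using System.Buffers;

class SnapshotProcessor
{
    // Rent and return buffers instead of allocating a fresh large array per
    // request, so the LOH never sees a stream of variable-sized allocations.
    private static readonly ArrayPool<byte> Pool = ArrayPool<byte>.Shared;

    public void ProcessSnapshot(int requiredBytes)
    {
        byte[] buffer = Pool.Rent(requiredBytes); // reused block, possibly larger than requested
        try
        {
            FillAndProcess(buffer, requiredBytes);
        }
        finally
        {
            Pool.Return(buffer); // hand the block back for the next request
        }
    }

    private void FillAndProcess(byte[] buffer, int length) { /* work on buffer[0..length) */ }
}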
As a rule of thumb, .NET works best when allocations are small and very short-lived, or stay alive for the entire application lifetime. The former are handled by the Gen 0/1 (i.e. fast) collections, and the latter end up in Gen 2/the LOH and are effectively never collected.
High-frequency, large, variable-sized allocations are probably the worst-case scenario for the garbage collector/memory allocator.
I am using ASP.NET with C# to re-rank a collection of images based on their content. But when I run it, I get the following error, even though my laptop has 4 GB of RAM and a 320 GB hard disk.
Exception of type 'System.OutOfMemoryException' was thrown.
How can I increase the RAM available for running my program?
It is nearly impossible to give you a good answer without seeing some code, but odds are that you are not actually running out of memory.
GDI will throw an OutOfMemoryException for many problems that are not related to memory at all. It can happen when you try to process a file that isn't actually an image, when the file is corrupt, or when it is an image format that GDI doesn't support.
First, check to make sure that every file or data stream you are processing is actually a real image file. If you are absolutely sure that the files are valid, and the format is supported by GDI, only then would I start looking at actual memory problems.
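As a rough illustration of that check (a sketch with a hypothetical TryLoadImage helper; System.Drawing's Image.FromFile is the classic GDI+ entry point that reports bad or unsupported files as OutOfMemoryException):

using System;
using System.Drawing;

static class ImageChecks
{
    // Returns false for files GDI+ cannot decode, instead of letting the
    // misleading OutOfMemoryException bubble up as a "memory" problem.
    public static bool TryLoadImage(string path, out Image image)
    {
        image = null;
        try
        {
            image = Image.FromFile(path);
            return true;
        }
        catch (OutOfMemoryException)
        {
            // Thrown for corrupt or unsupported image files, not only for
            // genuine memory exhaustion.
            return false;
        }
    }
}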
Two options -
1) there's a bug in your code and it isn't freeing things up.
2) 4GB of RAM isn't a lot.
Visual Studio will use as much memory as the laptop has, but you can "extend" it by enabling virtual memory, which I suspect is disabled on your computer. Virtual memory (aka the paging file) lets the operating system use disk space as if it were RAM.
However, it will be slow, because RAM is written to and read from disk as needed. Your laptop is probably already slow enough.
Your best bet is to buy more RAM for your laptop. 8 GB would be good enough (it is what I have on my "play around" laptop); 16 GB is even better!
To enable virtual memory on Windows 7: open System Properties (search for it or press Windows+Pause). Select the "Advanced" tab and open Performance Settings. Next select the "Advanced" tab there and press "Change...". Then check "Automatically manage paging file size" and/or select "System managed size" ("No paging file" disables virtual memory).
It is disabled by default because of the performance impact. Your PC will be slower because it reads/writes to disk...which is many times slower than Memory (RAM). But it works.
If you can - Buy more memory. You'll be happier.
I'm using Visual Studio 2008 to work on a Winform / WPF project.
It uses multiple projects and classes to build it into a working product.
My problem is, we have noticed that there is a 4-8 KB per second leak in memory usage. Granted, it is a small leak, but it is a non-stop, continuous 4-8 KB. Our application runs overnight and even for a few days at a time. When those few days come along, this thing has eaten up more memory than the computer can handle (usually 2-3 GB) and a forced restart of the PC is the only solution. This leak occurs even while nothing is happening except network communication with our host.
After further analysis of the project with ANTS Memory Profiler, we have discovered that the private bytes figure is continuously growing. Is there any way to tell where this private data is being allocated from? I haven't had much luck tracking this down with ANTS. Steps would help greatly!
Image of the private bytes increasing (~45 minutes):
Image of the Time line growth (~45 minutes):
Thanks in advance!
If the private bytes keep increasing, it means you have a memory leak. Try DebugDiag; it is from Microsoft and free, and it is a very good tool for tracking memory leaks on Windows.
Using the tool is simple: first you create a rule to monitor your process with DebugDiag Collection, and it will create memory dumps according to your rule (you can also create a memory dump manually). Then you use DebugDiag Analysis to analyze the dump; set the right symbol path before the analysis.
The MSDN article Identify And Prevent Memory Leaks In Managed Code might help too. It points out how to find out whether the memory leak is a native one or a managed one. If it is a purely managed .NET leak, you can also use the CLR Profiler to debug the problem.
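One quick way to make that native-vs-managed call, assuming the standard Windows performance counters are available, is to watch the process's "Private Bytes" alongside the CLR's "# Bytes in all Heaps": if only the former grows, the leak is in native/unmanaged memory; if both grow together, it is a managed leak. A sketch (the LeakWatcher name, the "MyWinformsApp" instance name, and the one-minute interval are arbitrary):

using System;
using System.Diagnostics;
using System.Threading;

class LeakWatcher
{
    static void Main()
    {
        string instance = "MyWinformsApp"; // the target process name as it appears in PerfMon
        var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance);
        var managedBytes = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance);

        while (true)
        {
            // Log both values and watch the trend over a long run.
            Console.WriteLine("{0:T}  private = {1:F1} MB   managed = {2:F1} MB",
                DateTime.Now,
                privateBytes.NextValue() / 1e6,
                managedBytes.NextValue() / 1e6);
            Thread.Sleep(60000); // sample once a minute
        }
    }
}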
I have a service that I query once in a very long while, and I would like to "streamline", or improve the efficiency of, its memory allocation.
Most of the time it just sits and waits, and once in a while it gets a request that requires allocating a lot of memory and doing some processing on it. I don't know the types or structures in advance; that depends on the request and varies wildly.
Now, the big processing request is preceded by some chatter (other requests) that might take a few seconds.
What I want to do is, when the chatter (the smaller requests) starts, say to the .NET Framework: "go to Windows and get yourself a couple of GB of memory so it'll be available faster when I ask", and when I'm done, say to .NET: "everything I'm not currently using, you can give back, because I'm not going to need it for a while".
I'm starting profiling as we speak... but I suspect this would be one of the things that could be improved.
I'll try to clarify the situation.
I have a service that sits on a server and 95% of the time just does nothing. Once in a long while it gets a request to do some mostly memory-intensive processing.
I know a little bit ahead of time that it's all going to happen.
All I want to do is hint to the GC: "We're going to need a lot of memory soon" and, later, "We're not going to need anything special for a while".
OK.
I've done the profiling and decided I don't care about this.
The allocation does take some time (several to several dozen milliseconds), but it's insignificant compared to the rest of the processing...
Regarding the releasing part, it happens eventually and doesn't really interfere with the rest of the server...
If you want to be able to reserve a chunk of memory for your uses then please see:
allocating "unmanaged" memory in c#
Note that doing so can be risky; the garbage collector and memory allocation in the .NET runtime are already rather good.
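A more managed-side option, if the service can target .NET Framework 4.6 or later (an assumption on my part; it did not exist when this question was asked), is GC.TryStartNoGCRegion, which asks the GC to commit a budget up front and defer collections until you end the region. A sketch (the HandleBigRequest/ProcessRequest names and the 256 MB budget are placeholders, and the budget must fit what the GC can actually reserve):

using System;
using System.Runtime;

class BigRequestHandler
{
    public void HandleBigRequest()
    {
        bool noGc = false;
        try
        {
            // Ask the runtime to make ~256 MB available now and avoid
            // collecting while the latency-sensitive work runs.
            noGc = GC.TryStartNoGCRegion(256L * 1024 * 1024);
        }
        catch (InvalidOperationException)
        {
            // A no-GC region is already in progress; just proceed normally.
        }

        try
        {
            ProcessRequest();
        }
        finally
        {
            if (noGc && GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                GC.EndNoGCRegion();

            // After the burst, encourage the runtime to hand memory back by
            // compacting the large object heap on the next full collection.
            GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
            GC.Collect();
        }
    }

    private void ProcessRequest() { /* memory-intensive work */ }
}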
If the allocated data can largely be cached, then I'd recommend caching what you can with WeakReference, so that quick successive requests can benefit from the cached data, but if a garbage collection happens between requests spaced a decent amount apart, the data can be released and simply re-created on the next request.
See: Weak reference benefits
And: http://msdn.microsoft.com/en-gb/library/system.weakreference.aspx
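A bare-bones sketch of that WeakReference caching pattern (the WeakCache name is just a placeholder; any thread-safe dictionary keyed however your requests are keyed would do):

using System;
using System.Collections.Concurrent;

class WeakCache<TKey, TValue> where TValue : class
{
    private readonly ConcurrentDictionary<TKey, WeakReference> _entries =
        new ConcurrentDictionary<TKey, WeakReference>();

    public TValue GetOrCreate(TKey key, Func<TKey, TValue> create)
    {
        // Reuse the cached value if the GC has not collected it yet;
        // otherwise rebuild it. Closely spaced requests hit the cache,
        // while long idle periods let the GC reclaim the memory.
        WeakReference weak;
        if (_entries.TryGetValue(key, out weak))
        {
            TValue alive = weak.Target as TValue;
            if (alive != null)
                return alive;
        }

        TValue created = create(key);
        _entries[key] = new WeakReference(created);
        return created;
    }
}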
The GC is, most of the time, smart enough to do this for you. However, this is an architectural problem and can be dealt with by modifying the flow of activities in the service.
E.g. you can allocate the objects required for processing the big request in advance, before the request comes. For deallocation, either implement the IDisposable interface on them and dispose of them once they are used, or leave it to the GC.
Further, you have to understand how memory allocation works. In order to get memory allocated for .NET objects, you have to know the type of object in advance; just allocating a plain block of memory is in no way helpful to you. Most of the time the object creation or cloning code consumes more resources than the mallocs the framework uses to allocate memory for the object.
Considering the details, I would say that even if you could successfully implement such a routine, it would make your code much more complex and might add a few more bugs. Better to leave this to the .NET Framework; it is very good at allocating and deallocating memory.
Possible Duplicate:
Maximum number of threads in a .NET app?
Is there a limit on the number of threads we can create in a .NET application?
I am assuming that the number of threads that can be created is limited by the amount of memory available, since each thread's stack needs to be allocated. Please correct me if I am wrong. Are there other factors that limit the number of threads, or is the number limited to a specific value?
How can I (roughly) calculate the maximum number of threads that can be created on a machine, if I know the machine's specifications?
As always, Raymond Chen has the answer on his blog. Note that his test appears to have been run using unmanaged code. My guess is that there's nothing in the .NET framework that actually limits the number of threads per process and that the limit would be enforced by the O/S. If that's truly the case then his test is still valid.
Also, I'm not sure whether it differs between 32-bit and 64-bit machines; I would imagine his results depend on RAM size and on a 32-bit vs. 64-bit CPU, along with possibly the number of CPUs. All that said, it looks like he was able to get 13,000 threads created.
The big issue with 13,000 threads running is that the time spent context switching is sure to eat up all the available CPU, and you're likely to get little to no work done.
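A rough back-of-the-envelope for the "how can I calculate it" part (my own approximation, not something from Chen's post): divide the usable address space by the stack reserved per thread. With the default 1 MB reserve, a 32-bit process tops out around 2,000 threads; shrinking the stack (which is how you get into the 13,000 range) raises the ceiling, and on 64-bit the address space stops being the limiting factor long before commit/RAM does.

using System;

class ThreadCountEstimate
{
    static void Main()
    {
        // Usable user-mode address space for a default 32-bit process: 2 GB.
        long addressSpace = 2L * 1024 * 1024 * 1024;

        // Default stack reserve per thread is 1 MB; it can be lowered via the
        // Thread(ThreadStart, int maxStackSize) constructor or the /STACK linker option.
        long defaultStackReserve = 1L * 1024 * 1024;
        long smallStackReserve = 128L * 1024;

        Console.WriteLine(addressSpace / defaultStackReserve); // ~2048 threads
        Console.WriteLine(addressSpace / smallStackReserve);   // ~16384 threads, same ballpark as the 13,000 above
    }
}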
If the application you're looking into is creating a lot of threads that are supposed to be doing intense work, you might not be getting a process hang as much as running into issues with the amount of context switching taking place. Obviously the most common issue in a multi-threaded application is a resource deadlock, but there are many tools available to troubleshoot that scenario.
Try the following links about deadlocks to determine if that's what you're actually running into:
Avoiding and Detecting Deadlocks
Deadlock Monitor
Stack Overflow Deadlock Question
Stack Overflow Deadlock Debug Question
In your actual programming experience, how has this knowledge of the STACK and the HEAP actually rescued you in real life? Any story from the trenches? Or is this concept just good for filling up programming books and theory?
The distinction in .NET between the semantics of reference types and value types is a much more important concept to grasp.
Personally, I have never bothered thinking about the stack or heap in all my years of coding (just CLR based).
To me it is the difference between being a "developer/programmer" and a "craftsman". Anyone can learn to write code and watch things just "magically happen" without knowing why/how. To be really valuable at what you do, I think it is important to find out as much as you can about the framework you're using. Remember, it's not just a language; it's a framework that you leverage to create the best application your abilities allow.
I've analyzed many memory dumps over the years and found it extremely helpful knowing the internals and differences between the two. Most of these have been OutOfMemory conditions and unstable applications. This knowledge is absolutely necessary to use WinDbg when looking at dumps. When investigating a memory dump, knowing how memory is allocated between the kernel/user-mode process and the CLR can at least tell you where to begin your analysis.
For example, let's take an OOM case:
The allocated memory you see in the Heap Sizes, Working Set, Private Memory, Shared Memory, Virtual Memory, Committed Memory, Handles, and Threads can be a big indicator of where to start.
There are about 8 different heaps that the CLR uses:
Loader Heap: contains CLR structures and the type system
High Frequency Heap: statics, MethodTables, FieldDescs, interface map
Low Frequency Heap: EEClass, ClassLoader and lookup tables
Stub Heap: stubs for CAS, COM wrappers, P/Invoke
Large Object Heap: memory allocations that require more than 85k bytes
GC Heap: user allocated heap memory private to the app
JIT Code Heap: memory allocated by mscoree (the Execution Engine) and the JIT compiler for managed code
Process/Base Heap: interop/unmanaged allocations, native memory, etc
Finding what heap has high allocations can tell me if I have memory fragmentation, managed memory leaks, interop/unmanaged leaks, etc.
Knowing that you have 1 MB (on x86) / 4 MB (on x64) of stack space allocated for each thread that your app uses reminds you that 100 threads will cost an additional 100 MB of virtual memory.
I had a client whose Citrix servers were crashing with OutOfMemory problems, becoming unstable, and responding slowly when their app was running on them in multiple sessions. After looking at the dump (I didn't have access to the server), I saw that there were over 700 threads in use by that instance of the app! Knowing the thread stack allocation allowed me to correlate the OOMs with the high thread usage.
In short, because of what I do for my "role", it is invaluable knowledge to have. Of course even if you're not debugging memory dumps it never hurts either!
It certainly is helpful to understand the distinction when one is building compilers.
Here are a few articles I've written about how various issues in memory management impact the design and implementation of the C# language and the CLR:
http://blogs.msdn.com/ericlippert/archive/tags/Memory+Management/default.aspx
I don't think it matters if you're just building average business applications, which I think most .NET programmers are.
The books I've seen just mention stack and heap in passing as if memorizing this fact is something of monumental importance.
Personally, this is one of the very few technical questions that I ask every person I'm going to hire.
I feel that it is critical to understanding how to use the .NET framework (and most other languages). I never hire somebody who doesn't have a clear understanding of memory usage on the stack vs. the heap.
Without understanding this, it's almost impossible to understand the garbage collector, understand .NET performance characteristics, and many other critical development issues.
The important distinction is between reference types and value types. It's not true that "value types go on the stack, reference types go on the heap". Jon Skeet has written about this and so has Eric Lippert.
We had a Claim entity (business object) which contained the data for an entire claim. One of the requirements of the application was to create an audit trail of every single value changed by the user. In order to do this without hitting the database twice, we would maintain the original claim entity in the form alongside a working claim entity. The working claim entity would get updated when the user clicked Save, and we would then compare the original claim entity's properties with the corresponding working claim entity properties to determine what changed. One day we noticed that our compare method was never finding a difference. This is where my understanding of the stack and the heap saved my rear end (specifically value types vs. reference types). Because we needed to maintain two copies of the same object in memory, the developer simply declared two variables:
Dim originalClaim As ClaimBE
Dim workingClaim As ClaimBE
then called the business layer method to return the claim object and assigned the same claimBE to both variables
originalClaim = BLL.GetClaim()
workingClaim = originalClaim ' copies the reference, not the object -- both variables now point to the same ClaimBE instance
hence two reference variables pointing at the same object on the heap. Nightmare averted.
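A minimal C# illustration of the same trap and its fix (the Claim class and its Copy method here are hypothetical stand-ins for the real business object):

using System;

class Claim
{
    public decimal Amount;

    public Claim Copy()
    {
        // MemberwiseClone produces an independent shallow copy on the heap.
        return (Claim)MemberwiseClone();
    }
}

class Program
{
    static void Main()
    {
        var original = new Claim { Amount = 100m };
        var working = original;              // copies the reference: ONE object, two names
        working.Amount = 250m;
        Console.WriteLine(original.Amount);  // 250 -- the "original" changed too, so no diff is ever found

        var original2 = new Claim { Amount = 100m };
        var working2 = original2.Copy();     // a separate object to edit
        working2.Amount = 250m;
        Console.WriteLine(original2.Amount); // 100 -- the comparison can now see what changed
    }
}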