I am currently analyzing my code and application for resource leaks. How do I monitor a C# process for its currently running threads?
Visual Studio already ships with a tool for this: Parallel Stacks. To open it, click
Debug -> Windows -> Parallel Stacks
You can get a quick overview of how to work with Parallel Stacks on MSDN.
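If you also want to inspect the running threads from code rather than from the debugger, a minimal sketch using System.Diagnostics might look like this (the output format is my own; note this enumerates OS threads, not managed Thread objects):

```csharp
using System;
using System.Diagnostics;

class ThreadMonitor
{
    static void Main()
    {
        // Enumerate the operating-system threads of the current process.
        Process current = Process.GetCurrentProcess();
        Console.WriteLine($"Process {current.Id} has {current.Threads.Count} threads:");
        foreach (ProcessThread t in current.Threads)
        {
            Console.WriteLine($"  Id={t.Id} State={t.ThreadState} Priority={t.PriorityLevel}");
        }
    }
}
```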
This is the approach I use when I need to locate a leak:
Open a memory profiler.
I use perfmon.
This article has some material about setting perfmon and #fmunkert also explains it rather well.
Locate an area in the code that you suspect is likely to contain the leak. This part mostly depends on you having good guesses about which part of the code is responsible for the issue.
Push the leak to the extreme: use labels and "goto" to isolate an area / function and repeat the suspicious code many times (a loop will work too; I find goto more convenient for this purpose).
In the loop I used a breakpoint that halted every 50 hits to examine the delta in memory usage. Of course you can change the value to fit a noticeable leak change in your application.
If you have located the area that causes the leak, the memory usage should rapidly spike.
If the memory usage does not spike, repeat stages 1-4 with another area of code that you suspect of being the root cause. If it does, continue to 6.
In the area you have found to be the cause, use the same technique (goto + labels) to zoom in and isolate smaller parts of the area until you find the source of the leak (please don't downvote me for the recursive step... :0) ).
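The goto/label amplification step might look like this rough sketch (SuspectOperation is a stand-in for whatever code you are isolating):

```csharp
static void StressSuspectArea()
{
    int iterations = 0;

repeat:
    // The code under suspicion. Note: any object allocated here must
    // also be disposed here, or the loop itself becomes the leak.
    SuspectOperation();   // placeholder for the suspicious code

    iterations++;
    // Put a hit-count breakpoint on the next line (e.g. every 50 hits)
    // and watch the memory delta in perfmon; a leaking area spikes fast.
    if (iterations < 10000)
        goto repeat;
}
```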
Please note that the downsides of this method are:
If you are allocating an object in the loop, its disposal should also be contained in the loop.
If you have more than one source of leak, it makes it harder to spot (yet still possible).
Good luck...
If you have Visual Studio 2013 you can download the Microsoft Concurrency Visualizer for Visual Studio:
https://msdn.microsoft.com/en-us/library/dd537632.aspx
It gives great insight into the application and the threads currently running:
synchronization, sleeping, blocking, etc.
Next to that, you can also download the extension (found on the same page).
In my opinion it is a great tool (and best of all, free).
Two days back we released a new build of an existing ASP.NET 2.0 application, which has now been converted to 4.5. Suddenly the app pool for this ASP.NET app is consuming heavy memory on the production server, more than 30 GB, I guess. Code-wise there is nothing this app was doing earlier, or that has been added in this new release, that should consume anything heavy; it also doesn't have any file uploads/downloads, heavy caching, nothing.
As I am not able to find anything in the code which can cause this, I am in need of a profiler which can tell me a few details on the server.
So is there any free or open source tool which can help us find out details like all session data, with sizes, that is stored on the server for this app pool, and any other details which can help us learn why this specific app pool is taking this much memory?
Apart from a tool, any other directions/suggestions would be helpful as well. Thanks...
These kinds of issues can often be traced with the WinDbg debugger tool from Microsoft, which is free.
First of all, you should create dump file of your current w3wp process.
https://msdn.microsoft.com/en-us/library/d5zhxt22.aspx
After that, you'll be able to load the current state of that process into WinDbg.
https://developer.microsoft.com/en-us/windows/hardware/windows-driver-kit
https://blogs.msdn.microsoft.com/jankrivanek/2012/11/15/setting-up-managed-code-debugging-with-sos-and-sosex/
https://theartofdev.com/windbg-cheat-sheet/
http://windbg.info/
Keep in mind that WinDbg is a low-level tool, so you will need to spend some time to learn it and get used to it.
Example of usage:
Create a process dump (*.DMP) via Task Manager. It is written to the C:\Users\{username}\AppData\Local\Temp folder.
Open WinDbg (x64) -> Open Crash Dump -> Select the created *.DMP
After that you need to set up symbols:
.symfix
.reload
Next, you should load the .NET runtime support:
.loadby sos clr
You could get an exception if the server and your machine don't have the same version of the .NET CLR (What to do with "The version of SOS does not match the version of CLR you are debugging" in WinDbg?). Test it with the !clrstack command.
If you want to load an additional module with extended commands (http://www.stevestechspot.com/):
.load PathToFile\sosex.dll
!sosex.help
Now you have everything in place and you can start to analyze memory heap, threads, locks etc....
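As a starting point for that analysis, here are some commonly used SOS commands for a memory investigation (these are standard SOS commands, not specific to your dump; <address> is a placeholder):

```
!dumpheap -stat                  (summary of the managed heap grouped by type)
!dumpheap -type System.String    (all instances of a given type)
!gcroot <address>                (shows what is keeping an object alive)
!eeheap -gc                      (GC heap sizes per generation)
!threads                         (lists managed threads)
!syncblk                         (sync blocks / lock contention)
```

Running !dumpheap -stat twice, some minutes apart, and comparing the counts per type is often enough to spot which type is accumulating.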
You could also find extremely helpful information, tips and tricks on a blog by Tess Ferrandez https://blogs.msdn.microsoft.com/tess/tag/debugging/
I am using ASP.NET with C# to re-rank a collection of images based on their content. But since I ran it, I have been getting the following error, even though my laptop has 4 GB of RAM and a 320 GB hard disk.
exception of type 'system.outofmemoryexception' was thrown
How can I increase the RAM available for running my program?
It is nearly impossible to give you a good answer without seeing some code, but odds are that you are not actually running out of memory.
GDI will throw an OutOfMemoryException for many problems that are not related to memory at all. It can happen when you try to process a file that isn't actually an image, when the file is corrupt, or when it is an image format that GDI doesn't support.
First, check to make sure that every file or data stream you are processing is actually a real image file. If you are absolutely sure that the files are valid, and the format is supported by GDI, only then would I start looking at actual memory problems.
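A minimal sketch of that check (GDI+ surfaces many non-memory failures, such as corrupt or unsupported files, as an OutOfMemoryException from Image.FromFile; the method name here is my own):

```csharp
using System;
using System.Drawing;   // GDI+ wrapper
using System.IO;

static class ImageChecker
{
    public static bool TryLoadImage(string path, out Image image)
    {
        image = null;
        if (!File.Exists(path))
            return false;
        try
        {
            image = Image.FromFile(path);
            return true;
        }
        catch (OutOfMemoryException)
        {
            // GDI+ throws this for corrupt files and unsupported image
            // formats, not only for genuine memory exhaustion.
            return false;
        }
    }
}
```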
Two options:
1) there's a bug in your code and it isn't freeing things up, or
2) 4 GB of RAM isn't a lot.
Visual Studio will use as much memory as the laptop has. But you can "extend" it by enabling virtual memory, which I suspect is disabled on your computer. Virtual memory (aka the paging file) allows the operating system to use disk space as memory.
However, it will be slow because RAM is written/read to disk as needed. Your laptop is probably already slow enough.
Your best bet is to buy more RAM for your laptop. 8GB would be good enough (it is what I have on my "play around" laptop) --- 16GB is even better!
To enable virtual memory on Windows 7, open the System Properties (search for it or press the Windows key and Pause). Select the "Advanced" tab and open Performance Settings. Next select the "Advanced" tab there and press "Change...". Choose "Automatically manage paging file size" and/or "System managed size" ("No paging file" disables virtual memory).
It is disabled by default because of the performance impact. Your PC will be slower because it reads/writes to disk, which is many times slower than memory (RAM). But it works.
If you can - Buy more memory. You'll be happier.
I'm using Visual Studio 2008 to work on a Winform / WPF project.
It uses multiple projects and classes to build it into a working product.
My problem is, we have noticed that there is a 4-8k per second leak in the memory usage. Granted, it is a small leak, but it is a non-stop, continuous 4-8k. Our application runs overnight and even for a few days at a time. When those few days come along, this thing has eaten up more memory than the computer can handle (usually 2-3 gigs) and a forced restart of the PC is the only solution. This leak occurs even while nothing is happening except network communication with our host.
After further analysis of the project with ANTS Memory Profiler, we discovered that the private bytes figure is continuously growing. Is there any way to tell where this private data is being created? I haven't had much luck tracking this down with ANTS. Steps would help greatly!
Image of the private bytes increasing (~45 minutes):
Image of the Time line growth (~45 minutes):
Thanks in advance!
If the private bytes keep increasing, it means you have a memory leak. Try DebugDiag; it is from MS and free, and also a very good tool for tracking memory leaks on Windows.
Using this tool is simple: first you create a rule to monitor your process with DebugDiag Collection, and it will create a memory dump according to your rule; you can also create the memory dump manually. Then you can use DebugDiag Analysis to analyze the dump; please set the right symbol path before the analysis.
This MSDN article, Identify And Prevent Memory Leaks In Managed Code, might help too. It points out how to find out whether the memory leak is a native one or a managed one. If it is a purely .NET managed leak, you can also use CLR Profiler to debug the problem.
This is a really general learning-based question, not a technical problem.
I'm making a game in Unity. The game involves a lot of pretty complex, high-object-count processes, and as such, I generate a lot of garbage. As everyone who makes games that use managed code knows, garbage collection spikes are a HUGE buzzkill. I am also trying to target mobile devices, so this pain is amplified.
My goal is simple: track down the procedures that generate the most garbage and pool and reuse the objects involved in order to reduce the load on the garbage collector. I have tackled all of the obvious classes, but my GC issues persist. If anyone can offer some greater wisdom here, please bring it on.
What I have been unable to track down in my research is a good way to measure GC load and track down problem spots in the code itself. I can get metrics from Unity and from diagnostics classes about the size of the memory pool (though I'm no expert on what any of those numbers really mean) and I can display the total number of GC collections that have happened since the start, but my knowledge of debugging this issue kind of ends there.
I know there MUST be better ways to solve this problem. Can anyone point me at the right tools or framework classes (or libraries?) that can help me out here? I've seen some mention of a debugger program (sgen?) that's part of Mono, but I can't find a download, let alone a guide for how to hook it up to a Unity game.
Any help would be greatly appreciated. Thanks.
For C# and Unity3D specifically, the profiler built into Unity3D is the best tool you'll have available to you. You should be able to get very far with the Unity Profiler. Specifically the CPU Profiler:
I have highlighted the most important column, GC Alloc, which, on a frame-by-frame basis, shows memory that has been allocated for garbage collection. The aim of the game here is to make that column as consistently close to zero as possible.
I'd suggest spending a few hours in there, playing your game, pausing it and digging into areas of your code where GC Alloc is showing numbers.
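As a concrete illustration of the kind of thing GC Alloc tends to flag, here is a hypothetical component that allocates a string every frame, and an allocation-avoiding rewrite (ScoreDisplay and Score.Current are example names of my own, not Unity APIs):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ScoreDisplay : MonoBehaviour
{
    public Text label;          // assigned in the inspector
    private int lastScore = -1;

    // Naive version: builds a new string every frame, so every frame
    // shows up in the profiler under GC Alloc:
    //   void Update() { label.text = "Score: " + Score.Current; }

    // Better: only allocate when the value actually changes.
    void Update()
    {
        if (Score.Current != lastScore)
        {
            lastScore = Score.Current;
            label.text = "Score: " + lastScore;
        }
    }
}
```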
Finally, I strongly suggest you check out this video tutorial by one of the developers of Unity on memory debugging and using the Unity Editor profiler properly.
Caveat: it's all down to the details. General principles are nice, but you're right: real data is the right thing. Unfortunately it's hard to get for Unity: I've seen reports that dotTrace works for some aspects of Unity games, but that most of Unity looks like a black box to it.
A useful rule of thumb is to look for new statements. It's a rule of thumb, not a 100% scientific principle (structs are created with new but they go on the stack... mostly...), but it's a good leading indicator of possible causes of fragmentation. If you "new" something up and let it just disappear (go out of scope, be dereferenced, etc.), it will end up in the garbage collector. Moreover, if you new up a batch of items and dereference some of them, you'll also fragment the heap, which makes the collection spikes worse as the collector tries to defragment the memory.
This is why most recommendations in this space will be to use pools, as you're already doing.
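A minimal generic pool along those lines might be sketched like this (the Stack-based design and class name are my own; real Unity pools usually also reset an object's state when it is returned):

```csharp
using System.Collections.Generic;

// Minimal object pool: reuses instances instead of letting them become
// garbage, trading a little memory for fewer GC collection spikes.
public class SimplePool<T> where T : class, new()
{
    private readonly Stack<T> items = new Stack<T>();

    public T Get()
    {
        // Reuse a pooled instance if one exists, otherwise allocate.
        return items.Count > 0 ? items.Pop() : new T();
    }

    public void Return(T item)
    {
        // Caller is responsible for resetting the item's state first.
        items.Push(item);
    }
}
```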
Here are some general purpose links that might be useful in tackling memory management in Unity:
http://www.gamasutra.com/blogs/WendelinReich/20131109/203841/C_Memory_Management_for_Unity_Developers_part_1_of_3.php
http://andrewfray.wordpress.com/2013/02/04/reducing-memory-usage-in-unity-c-and-netmono/
I'm not sure if this kind of question is appropriate, but it's been suggested I ask here, so here goes.
For a subject at university this semester our assignment is to take some existing code and parallelize it. We've got a bit of an open end on it, but open source is really the only way we are going to get existing code. I could write some code and then parallelize it, but existing code (and perhaps one where I could make a genuine contribution) would be best, to avoid doubling my workload for little benefit.
I was hoping to use C# and make use of the new Task parallel library, but I'm struggling to find some C# open source projects that are computationally expensive enough to make use of parallelization (and don't already have it).
Does anyone have some suggestions of where to look? Or is C# just not going to have enough of that kind of thing as open source (should I perhaps try C++)?
I don't know if they already use parallel tasks, but good candidates are image manipulation programs, such as paint.net or pinta.
I don't know the scope of this project (if it's just a weekly assignment or your final project), but a process that benefits from parallelization does not have to be "embarrassingly parallel" as Hans's linked article describes. A problem will benefit from being parallelized if:
The solution to the problem can be expressed as the "sum" of a repetitive series of smaller operations,
The smaller operations have minimal effect on each other, and
The scale of the problem is sufficient to make the benefits of parallelization greater than the loss due to the added overhead of creating and supervising multiple worker processes.
Examples of problems that are often solved linearly, but can benefit from parallelization include:
Sorting. Some algorithms like MergeSort are atomic enough to parallelize; others like QuickSort are not.
Searching. BinarySearch cannot be parallelized, but if you're searching unordered data like a document for one or more occurrences of words, linear searches can use "divide and conquer" optimizations.
Data transformation workflows. Open a file, read its raw data, carve it up into domain fields, turn those domain fields into true domain objects, validate them, and persist them. Each data file is often totally independent of all others, and the process of transformation (which is everything between reading the file and persisting it) is often a bottleneck that benefits from having more processors thrown at it.
Constraint satisfaction problems. Given a series of business rules defining relationships and constraints of multiple variables in a problem space, find a set of those variables that meets all constraints, or determine that there are none. Common applications include transportation route scheduling and business process optimization. This is an evolving sector of computational algorithms, of relatively high academic interest, and so you may find published public-domain code of a basic CSP algorithm you can multithread. It may be described as embarrassingly parallel, as the best-known solution is "intelligent brute force", but nonetheless a possible solution can be evaluated independently of the others, and so each can be given to a worker thread.
Processes defined as "embarrassingly parallel" are generally any problems sufficiently large in scale, yet atomic and repetitive, that parallel processing is the only feasible solution. The Wiki article Hans links to mentions common applications; they usually boil down, in general, to the application of a relatively simple computation to each element of a very large domain of data.
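For the data transformation workflow described above, a minimal TPL sketch might look like this (the "data" directory and the Transform step are placeholders for a real pipeline):

```csharp
using System.IO;
using System.Threading.Tasks;

class Pipeline
{
    static void Main()
    {
        string[] files = Directory.GetFiles("data", "*.txt");

        // Each file is independent of the others, so the transformation
        // can be farmed out to worker threads with Parallel.ForEach.
        Parallel.ForEach(files, path =>
        {
            string raw = File.ReadAllText(path);
            string result = Transform(raw);            // the expensive step
            File.WriteAllText(path + ".out", result);  // persist
        });
    }

    // Placeholder for the real domain transformation.
    static string Transform(string raw) => raw.ToUpperInvariant();
}
```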
Check out Alglib, especially the open source C# edition. It contains a lot of matrix and array manipulations that will be nicely suitable for TPL.
The Bouncy Castle project implements several encryption algorithms in C# and Java. Perhaps some of them are not as parallelized as they could be.