Data Protection in .Net - c#

I am getting this question from our clients: if we copy-paste data or store it in a variable, is there a chance that a hacker can read that data from RAM and use it before the GC disposes of it?
We generally don't dispose of string objects; they are stored on the managed heap and collected by the GC whenever it runs.
This is what I understand about the GC:
The memory that is used by allocated objects on the managed heap surpasses an acceptable threshold. This threshold is continuously adjusted as the process runs.
The GC.Collect method is called. In almost all cases, you do not have to call this method, because the garbage collector runs continuously.
Is it possible for a hacker to get into RAM and read the data before the GC clears it? If yes, how can we overcome it?

If the hacker can read memory in your process, the unpredictable lifetime of objects due to the GC is the least of your problems. Any language is vulnerable to this kind of issue, as computers effectively manipulate all data in memory (whether it's in a GC-able heap or elsewhere - C and assembly language need to store the data in memory too).
Technologies exist (like Intel SGX) that try to overcome this issue, but they too have known exploits. Fundamentally, no software-only solution can stop attackers once they can read your memory.

I agree with the comments regarding the futility of trying to safeguard data in memory if an attacker already has the ability to read process memory entirely.
That said, many attackers will be attacking via exploits that allow only imperfect access to subsections of system memory, meaning use of SecureString is still of practical utility.
I recommend reading this thread for a discussion of the applications and limitations: When would I need a SecureString in .NET?
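For completeness, here is a minimal sketch of the SecureString approach, assuming the secret is built one character at a time and the consuming API accepts an unmanaged pointer; the literal password and the class name are purely illustrative:

using System;
using System.Runtime.InteropServices;
using System.Security;

class SecureStringSketch
{
    static void Main()
    {
        using (var secret = new SecureString())
        {
            // In real code the characters would come from Console.ReadKey or a password box,
            // never from a plain string literal like this illustrative one.
            foreach (char c in "p@ssw0rd")
                secret.AppendChar(c);
            secret.MakeReadOnly();

            // Expose the plain text only for the moment it is needed, then zero it out.
            IntPtr ptr = Marshal.SecureStringToGlobalAllocUnicode(secret);
            try
            {
                // ... hand 'ptr' to the native or credential API here ...
            }
            finally
            {
                Marshal.ZeroFreeGlobalAllocUnicode(ptr);
            }
        }
    }
}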

Related

Weird out of memory exceptions [duplicate]

If your application has to do a lot of allocation/de-allocation of large objects (>85,000 bytes), it will eventually cause memory fragmentation and your application will throw an OutOfMemoryException.
Is there any solution to this problem or is it a limitation of CLR memory management?
Unfortunately, all the info I've ever seen only suggests managing the risk factors yourself: reuse large objects, allocate them at the beginning, make sure they're of sizes that are multiples of each other, use alternative data structures (lists, trees) instead of arrays. That just gave me another idea of creating a non-fragmenting List that splits itself into smaller arrays instead of using one large one (a rough sketch follows the article link below). Arrays / Lists seem to be the most frequent culprits IME.
Here's an MSDN magazine article about it:
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx, but there isn't that much useful in it.
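As a rough sketch of that non-fragmenting list idea (assumptions: grow-only, no bounds checking, and a chunk size chosen so each backing array stays well below the 85,000-byte LOH threshold):

using System.Collections.Generic;

public class ChunkedList<T>
{
    private const int ChunkSize = 4096;                 // illustrative; tune per element type
    private readonly List<T[]> _chunks = new List<T[]>();
    private int _count;

    public int Count { get { return _count; } }

    public void Add(T item)
    {
        if (_count % ChunkSize == 0)
            _chunks.Add(new T[ChunkSize]);              // many small arrays instead of one large one
        _chunks[_count / ChunkSize][_count % ChunkSize] = item;
        _count++;
    }

    public T this[int index]
    {
        get { return _chunks[index / ChunkSize][index % ChunkSize]; }
        set { _chunks[index / ChunkSize][index % ChunkSize] = value; }
    }
}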
The thing about large objects in the CLR's Garbage Collector is that they are managed in a different heap.
The garbage collector uses a mechanism called "compacting", which is basically defragmentation and re-linking of objects in the regular heap.
The thing is, since "compacting" large objects (copying and re-linking them) is an expensive procedure, the GC provides a different heap for them, which is never compacted.
Note also that memory allocation is contiguous: if you allocate Object #1 and then Object #2, Object #2 will always be placed after Object #1.
This is probably what's causing you to get OutOfMemoryExceptions.
I would suggest having a look at design patterns like Flyweight, Lazy Initialization and Object Pool.
You could also force GC collection, if you're suspecting that some of those large objects are already dead and have not been collected due to flaws in your flow of control, causing them to reach higher generations just before being ready for collection.
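A minimal sketch of the Object Pool suggestion above, assuming .NET 4 or later for ConcurrentBag and fixed-size buffers (the class and sizes are illustrative):

using System.Collections.Concurrent;

public class BufferPool
{
    private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize) { _bufferSize = bufferSize; }

    public byte[] Rent()
    {
        byte[] buffer;
        // Reuse an existing large array when one is available; allocate only as a last resort.
        return _pool.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    public void Return(byte[] buffer)
    {
        if (buffer != null && buffer.Length == _bufferSize)
            _pool.Add(buffer);                          // keep it for the next caller instead of letting it churn the LOH
    }
}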
A program always bombs on OOM because it is asking for a chunk of memory that's too large, never because it has completely exhausted all of the virtual memory address space. You could argue that's a problem with the LOH getting fragmented; it is just as easy to argue that the program is simply using too much virtual memory.
Once a program goes beyond allocating half the addressable virtual memory (a gigabyte), it is really time to either make its code smarter so it doesn't gobble so much memory, or make a 64-bit operating system a prerequisite. The latter is always cheaper; it doesn't come out of your pocket either.
Is there any solution to this problem or is it a limitation of CLR memory management?
There is no solution besides reconsidering your design. And it is not a problem of the CLR. Note that the problem is the same for unmanaged applications. It stems from the fact that too much memory is in use by the application at the same time, in segments laid out 'disadvantageously' in memory. If some external culprit has to be pointed at nevertheless, I would rather point at the OS memory manager, which (of course) does not compact its VM address space.
The CLR manages free regions of the LOH in a free list. In most cases this is the best that can be done against fragmentation. But since, for really large objects, the number of objects per LOH segment decreases, we eventually end up having only one object per segment. And where those objects are positioned in the VM space is completely up to the OS memory manager. This means the fragmentation mostly happens at the OS level, not in the CLR. This is an often overlooked aspect of heap fragmentation, and .NET is not to blame for it. (But it is also true that fragmentation can occur on the managed side, as nicely demonstrated in that article.)
Common solutions have been named already: reuse your large objects. So far I have not been confronted with any situation where this could not be achieved with proper design. However, it can be tricky at times and therefore expensive.
We were processing images in multiple threads. With the images being large enough, this also caused OutOfMemoryExceptions due to memory fragmentation. We tried to solve the problem by using unsafe memory and pre-allocating a heap for every thread. Unfortunately, this didn't help completely, since we relied on several libraries: we were able to solve the problem in our own code, but not in the third-party code.
Eventually we replaced threads with processes and let the operating system do the hard work. Operating systems solved memory fragmentation long ago, so it's unwise to ignore that.
I have seen in a different answer that the LOH can shrink in size:
Large Arrays, and LOH Fragmentation. What is the accepted convention?
"
...
Now, having said that, the LOH can shrink in size if the area at its end is completely free of live objects, so the only problem is if you leave objects in there for a long time (e.g. the duration of the application).
...
"
Other than that, you can make your program run with extended memory: up to 3 GB on a 32-bit system and up to 4 GB on a 64-bit system.
Just add the /LARGEADDRESSAWARE flag in your linker, or use this post-build event:
call "$(DevEnvDir)..\tools\vsvars32.bat"
editbin /LARGEADDRESSAWARE "$(TargetPath)"
In the end, if you are planning to run the program for a long time with lots of large objects, you will have to optimize memory usage, and you might even have to reuse allocated objects to avoid the garbage collector, which is similar in concept to working with real-time systems.

.NET Free memory usage (how to prevent overallocation / release memory to the OS)

I'm currently working on a website that makes large use of cached data to avoid roundtrips.
At startup we get a "large" graph (hundreds of thousands of objects of different kinds).
Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization)
I'm using Red Gate's memory profiler to debug memory issues (the memory didn't seem to fit with how much memory we should need "after" we're done initializing) and end up with this report:
Now what we can gather from this report is that:
1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialisation, but now that it's free, I'd like it to be returned to the OS)
2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialisation process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation)
3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.
So my question is, what can I do to go further with this? I'm not really sure what to look for in terms of debugging / tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release.
Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free in .NET back to the system, as well as to defragment said memory.
Note that a slow solution is fine; if it's possible to manually defragment the large heap I'd be all for it, as I can call it at the end of RefreshCache, and it's OK if it takes a second or two to run.
Thanks for your help!
A few notes I forgot:
1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and the same if I run it in a .NET 4 pool, convert it to .NET 4 and recompile.
2) These are the results of a release build, so a debug build cannot be the issue.
3) And this is probably quite important: I do not get these issues at all in the WebDev server, only in IIS. In WebDev I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!)
Objects allocated on the large object heap (objects >= 85,000 bytes, normally arrays) are not compacted by the garbage collector. Microsoft decided that the cost of moving those objects around would be too high.
The recommendation is to reuse large objects if possible to avoid
fragmentation on the managed heap and the VM space.
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
I'm assuming that your large objects are temporary byte arrays created by your deserialization library. If the library allows you to supply your own byte arrays, you could preallocate them at the start of the program and then reuse them.
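If the library does accept a caller-supplied buffer or stream, the reuse pattern might look roughly like this sketch (the delegate stands in for whatever deserialization call the library actually exposes; the buffer size and the single-Read simplification are assumptions):

using System;
using System.IO;

class ReusableDeserializationBuffer
{
    // Allocated once at startup, so the large array lands on the LOH a single time.
    private readonly byte[] _buffer = new byte[8 * 1024 * 1024];

    public T Read<T>(Stream source, int length, Func<Stream, T> deserialize)
    {
        // Simplification: assumes 'length' bytes are available and read in one call.
        source.Read(_buffer, 0, length);
        using (var view = new MemoryStream(_buffer, 0, length, false))
        {
            return deserialize(view);   // deserialize from the reused memory, not a fresh array
        }
    }
}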
I know this isn't the answer you'd like to hear, but you can't forcefully release the memory back to the OS. However, for what reason do you want to do so? .NET will free its heap back to the OS once you're running low on physical memory. But if there's an ample amount of free physical memory, .NET will keep its heap to make future allocation of objects faster. If you really wanted to force .NET to release its heap back to the OS, I suppose you could write a C program which just mallocs until it runs out of memory. This should cause the OS to signal .NET to free its unused portion of the heap.
It's better that unused memory be reserved for .NET so that your application will have better allocation performance (since the runtime knows what memory is free and what isn't, allocation can just use the free memory without having to syscall into the OS to get more memory).
The garbage collector is in charge of defragmenting the heap. Every so often (usually during collection runs), it will move objects around the heap if it determines this needs to be done. (This is why C++/CLI has the pin_ptr construct for "pinning" objects).
Fragmentation usually isn't a big issue with memory, though, since it provides fast random access.
As for your OutOfMemoryException, I don't have a good answer. Ordinarily I'd suspect that your old object graph isn't being collected (some object somewhere is holding a reference onto it, a "memory leak"). But since you're using a profiler, I don't know.
As of .NET 4.5.1 you can set a one-time flag to compact LOH before issuing a call to GC collect, i.e.
Runtime.GCSettings.LargeObjectHeapCompactionMode = System.Runtime.GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // This will cause the LOH to be compacted (once).
Some testing and some C++ later, I've found the reason why I get so much free memory: it's because IIS instantiates the CLR with VM hoarding (providing a DLL to instantiate it without VM hoarding takes up as much initial memory, but does release most of it as time goes by, which is the behaviour I expect).
So this does fix my reported memory issue; however, I still get about 100 MB of free memory no matter what, and I still think this is due to fragmentation and fragments only being released all at once, because the profiler still reports memory fragmentation. So I'm not marking my own answer as accepted, in the hope that someone can shed some light on this or direct me to tools that can either fix it or help me debug the root cause.
It's intriguing that it works differently on the WebDev server than in IIS...
Is it possible that IIS is using the server garbage collector, and the WebDev server the workstation garbage collector? The method of garbage collection can affect fragmentation. It'll probably be set in your aspnet.config file. See: http://support.microsoft.com/kb/911716
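One quick way to confirm which flavour each host is actually using is to log GCSettings.IsServerGC from the site itself; a small diagnostic sketch (where the output goes is up to you):

bool serverGC = System.Runtime.GCSettings.IsServerGC;
System.Diagnostics.Trace.WriteLine("Server GC enabled: " + serverGC);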
If you haven't found your answer, I think the following clues can help you:
Back to basics: we sometimes forget that objects can be explicitly freed by calling their Dispose method (since you didn't mention it, I suppose you do an "object = null" assignment instead).
Use the inherited method; you don't need to implement one unless your class doesn't have it, which I doubt.
The MSDN help states the following about this method:
... There is no performance benefit in implementing the Dispose
method on types that use only managed resources (such as arrays)
because they are automatically reclaimed by the garbage collector. Use
the Dispose method primarily on managed objects that use native
resources and on COM objects that are exposed to the .NET
Framework. ...
Because it says that "they are automatically reclaimed by the garbage collector", we can infer that the method does the "releasing thing" when it is called (again, I'm only trying to give you clues).
Besides, I found this interesting article (I suppose... I didn't read it... completely): Garbage Collection: Automatic Memory Management in the Microsoft .NET Framework (http://msdn.microsoft.com/en-us/magazine/bb985010.aspx), which states the following in the "Forcing an Object to Clean Up" section:
..., it is also recommended that you add an additional method to
the type that allows a user of the type to explicitly clean up the
object when they want. By convention, this method should be called
Close or Dispose ....
Maybe the answer lies in this article if you read it carefully or just keep investigating in this direction.
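As a minimal illustration of the explicit-cleanup suggestion above, wrapping a disposable object in a using block calls Dispose deterministically instead of waiting for the GC (the file path here is purely illustrative):

using (var stream = new System.IO.FileStream(@"C:\temp\cache.bin", System.IO.FileMode.Open))
{
    // ... work with the stream ...
}   // Dispose runs here, releasing the native file handle immediately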

Why does .NET reserve so much memory for my application?

When I run my application, in a profiler I see that it uses about 80 MB of memory (total committed bytes, performance counter). But when I look at the size of the allocated memory, it is over 400 MB!
So my question is, why is .NET reserving so much memory for my application? Is this normal?
You should read Memory Mystery. I had similar questions a while ago and stopped asking myself after reading it.
I read other sources but can't find them now; use keywords like "unreasonable allocation of memory windows OS". In a nutshell, the OS gives your app more than it requires, depending on the physically available memory resources.
For example, if you run your app on two machines with different amounts of RAM, it is practically guaranteed that the two machines will have different memory allocations.
As you no doubt know, there is a massive difference between actual memory used and allocated. An application's allocated memory doesn't mean that it's actually being used anywhere; all it really means is that the OS has 'marked' a zone of virtual memory (which is exactly that - virtual) ready for use by the application.
The memory isn't necessarily being used or starving other processes - it just could if the app starts to fill it.
This allocated number, also, will likely scale based on the overall memory ecosystem of the machine. If there's plenty of room when an app starts up, then it'll likely grab a larger allocation than if there's less.
That principle is the same as the one which says it's good practice to create a List<T>, say, with a reasonable initial capacity that'll mean a decent number of items can be added before resizing needs to take place. The OS takes the same approach with memory usage.
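The same idea in code: a capacity hint avoids repeated reallocation and copying of the backing array as items are added (10000 is just an example figure):

var items = new List<string>(10000);   // capacity hint, not a hard limit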
"Reserving" memory is by no means the same as "allocated" ram. Read the posts Steve and Krishna linked to.
The part your client needs to look at is Private Bytes. But even that isn't exactly a hard number as parts of your app may be swapped to the virtual disk.
In short, unless your Private Bytes section is pretty well out of control OR you have leaks (ie: undisposed unmanaged resources) you (and your client) should ignore this and let the OS manage what is allocated, what's in physical ram and what's swapped out to disk.
It's fairly common for software to issue one large memory request to the underlying operating system, then internally manage its own use of the allocated memory block. So common, in fact, that Windows' (and other operating systems') memory manager explicitly supports the concept, called "uncommitted memory" -- memory that the process has requested but hasn't made use of yet. That memory really doesn't exist as far as bits taking up space on your DRAM chips until the process actually makes use of it. The preallocation of memory effectively costs nothing.
Applications do this for many reasons -- though it's primarily done for performance reasons. An application with knowledge of its own memory usage patterns can optimize its allocator for that pattern; similarly, for address locality reasons, as successive memory requests from the OS won't always be 'next' to each other in memory, which can affect the performance of the CPU cache and could even preclude you from using some optimizations.
.NET in particular allocates space for the managed heap ahead of time, for both of the reasons listed above. In most cases, allocating memory on the managed heap merely involves incrementing a top-of-heap pointer, which is incredibly fast --- and also not possible with the standard memory allocator (which has a more general design to perform acceptably in a fragmented heap, whereas the CLR's GC uses memory compaction to sharply limit the fragmentation of the managed heap), and also not possible if the managed heap itself is fragmented across the process address space due to multiple allocations at different points in time.

How can I tell if the .Net 3.5 garbage collector has run?

I have an application that creates trees of nodes, then tosses them and makes new trees.
The application allocates about 20 MB upon startup. But I tried loading a large file of nodes many times, and the allocated memory went above 700 MB. I thought I would see memory being freed by the garbage collector on occasion.
The machine I'm on has 12 GB of RAM, so maybe it's just that such a "small" amount of memory being allocated doesn't matter to the GC.
I've found a lot of good info on how the GC works, and that it's best not to tell it what to do. But I wanted to verify that it's actually doing anything, and that I'm not somehow doing something wrong in the code that prevents my objects from being cleaned up.
The GC generally runs when either of the scenarios below occur:
You call GC.Collect (which you shouldn't)
Gen0's budget is exhausted
There are some other scenarios as well, but I'll skip those for now.
You didn't tell us how you measured the memory usage, but if you're looking at the memory usage of the process itself (e.g. through Task Manager), then you may not see the numbers you expect. Remember that the .NET runtime essentially has its own memory manager that handles memory usage on behalf of your managed application. The runtime tries to be smart about it so it doesn't allocate and free memory to the OS all the time (those are expensive operations). This question may be relevant as well.
If you're concerned about memory leaks have a look at some of the answers here.
When does the .Net 3.5 garbage collector run?
I thought I would see memory being freed by the garbage collector on occasion.
Since the GC is non-deterministic, you won't be able to necessarily determine when it is going to issue a collection. Short answer: It will run when needed. Trying to analyze your code and predict or assume it should be running at a certain time usually ends up down a rabbit hole.
Answer to: am I leaking objects, or has the GC simply not needed to run yet?
Use a memory profiler to see what objects are allocated. As a basic step, force garbage collection (GC.Collect) and check whether the allocated memory (GC.GetTotalMemory) seems reasonable.
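A rough version of that basic check might look like this (diagnostic use only; the double Collect around WaitForPendingFinalizers is there so objects freed by finalizers are also reclaimed before measuring):

GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
long managedBytes = GC.GetTotalMemory(true);   // true = allow a collection before measuring
Console.WriteLine("Managed heap after collection: {0:N0} bytes", managedBytes);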
If you want to make sure that you're not leaving any unwanted objects behind, you can use the dotTrace memory profiler. It allows you to take two snapshots of objects in memory (taken some time apart) and compare them. You can clearly see if any old nodes are still hanging around, and what is keeping a reference to them and stopping them from being collected.
You may have the managed equivalent of a memory leak. Are you maintaining stale references to these objects (i.e., do you have a List<T> or some other object which tracks these nodes)?
Are they subscribing to an event of an object that is not going out of scope? The event source maintains a reference to the subscriber, so if you don't detach the handler it will keep your objects alive.
You may also be forgetting to Dispose of objects that implement IDisposable. Can't say without seeing your code.
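For example, if a long-lived loader raises an event for each node, forgetting to detach the handler keeps every subscriber reachable (the names below are illustrative, not from the question):

loader.NodeParsed += OnNodeParsed;   // the publisher now holds a delegate referencing 'this'
// ... later, when the tree is discarded ...
loader.NodeParsed -= OnNodeParsed;   // detach so the subscriber can be collected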
The exact behavior of the GC is implementation defined. You should design your application such that it does not matter. If you need deterministic memory management then you are using the wrong language. Use a tool (RedGate's ANTS profiler will do) to see if you are leaking references somewhere.
This comic is the best way to explain this topic :)
You can monitor garbage collector activity in .NET 3.5 SP1 and beyond with GC.RegisterForFullGCNotification, as described in this link: http://www.abhisheksur.com/2010/08/garbage-collection-notifications-in-net.html
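A minimal sketch of that notification pattern, assuming concurrent GC has been disabled in configuration (a requirement for these APIs) and that thresholds of 10 are acceptable for your workload:

GC.RegisterForFullGCNotification(10, 10);   // thresholds must be between 1 and 99

var watcher = new System.Threading.Thread(() =>
{
    while (true)
    {
        // Block until the runtime reports an impending or completed full collection.
        if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
            Console.WriteLine("Full GC approaching");
        if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
            Console.WriteLine("Full GC completed");
    }
});
watcher.IsBackground = true;
watcher.Start();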

C# - GC.GetTotalMemory() Question

I'm creating a C# based Windows service that will run 24x7 for months on end. I'd like to be able to track general memory usage of my service. It doesn't need to be exact down to the byte. A general amount allocated will suffice. I'll be monitoring this for trends in memory consumption. Is GC.GetTotalMemory() an appropriate way to monitor this?
I'm aware of Performance Monitor and the use of counters. However, this service is going to be running on at least 12 different servers. I don't want to track PM on 12 different servers. The service will persist all performance and memory usage information, from all instances, to a central DB to avoid this, and for easier analysis.
Only if it's a purely managed-memory application. If any of it is calling off to unmanaged code, the memory allocated there will never be registered with the garbage collector (unless someone remembers to call GC.AddMemoryPressure).
GC.GetTotalMemory retrieves the amount of memory thought to be allocated. It only knows about memory allocated by the managed components, unless you call GC.AddMemoryPressure to tell it about other memory allocated.
You can get a better idea of the real amount of memory allocated by reading Process.WorkingSet64, Process.VirtualMemorySize64, and other such properties of the Process class. Just call Process.GetCurrentProcess, and then read what you need.
That said, you're probably better off using the performance counters.
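A point-in-time snapshot along those lines, suitable for persisting to the central database alongside GC.GetTotalMemory (a sketch, not tied to the asker's schema):

using System;
using System.Diagnostics;

class MemorySnapshot
{
    static void Main()
    {
        using (Process current = Process.GetCurrentProcess())
        {
            Console.WriteLine("Working set:    {0:N0} bytes", current.WorkingSet64);
            Console.WriteLine("Private bytes:  {0:N0} bytes", current.PrivateMemorySize64);
            Console.WriteLine("Virtual memory: {0:N0} bytes", current.VirtualMemorySize64);
            Console.WriteLine("Managed heap:   {0:N0} bytes", GC.GetTotalMemory(false));
        }
    }
}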
This isn't a direct answer to your question, but was prompted by your intent to keep your service running 24x7 for months.
I would be very aware of heap fragmentation. See this article for an explanation
Use performance counters. You can set them up to log to file.
Alternatively you can read performance counters in your code, and save out the data to a central location.
