Bitmap constructor with scan0 and unmanaged resources - C#

I have written some code to retrieve frames from a camera, along with information obtained from these frames, and to display them on a form.
All the data I get is unmanaged, as it comes from a library of my own written in C++ on top of OpenCV.
I prefer getting all the data at once with a single function call rather than using an OpenCV wrapper that would P/Invoke several times to get the same result. Furthermore, the code is easier to maintain this way, I have much more control over everything that is going on, and I have many other reasons to prefer this approach.
Everything is OK, (seemingly) perfectly working, and I'm happy, but… there is something I would like to understand better with your help.
At a certain point I create a bitmap with the unmanaged pixel data with the method
public Bitmap(int width, int height, int stride, PixelFormat format, IntPtr scan0);
My questions are the following (I have some ideas, but just tell me if I'm right):
1) I don't release the data pointed to by scan0, as I think that once the data is owned by the bitmap object, garbage collection will do the job for me. Am I right?
2) I don't like the fact that a new Bitmap instance is created and allocated every time (apart from the pixel data), but I suppose there is no better way of getting a Bitmap out of unmanaged data.
3) I think there is no need to tell the garbage collector that there is a big amount of data to clean up with GC.AddMemoryPressure(…), as it already knows, estimating from the information provided at initialization.
EDIT
I've found this in the documentation:
The caller is responsible for allocating and freeing the block of memory specified by the scan0 parameter. However, the memory should not be released until the related Bitmap is released.
The only way this can work is if a Bitmap created this way leaves the data untouched and never changes its position in memory.

1) I don't release the data pointed to by scan0, as I think that once the data is owned by the bitmap object, garbage collection will do the job for me. Am I right?
No, the garbage collector knows nothing about memory you allocated on the unmanaged side; that is exactly why it is called unmanaged. You have to call delete in the unmanaged code to release the allocated memory, and only after the Bitmap no longer needs it.
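A minimal sketch of that lifetime rule, assuming hypothetical VideoSource_GetFrame/VideoSource_FreeFrame exports on the C++ side (substitute whatever your library actually provides):
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class FrameInterop
{
    // Hypothetical exports of the C++ camera library.
    [DllImport("MyCameraLib.dll")]
    static extern IntPtr VideoSource_GetFrame(IntPtr camera);

    [DllImport("MyCameraLib.dll")]
    static extern void VideoSource_FreeFrame(IntPtr frame);

    public static void ProcessOneFrame(IntPtr camera, int width, int height, int stride)
    {
        IntPtr scan0 = VideoSource_GetFrame(camera);
        try
        {
            using (var bmp = new Bitmap(width, height, stride,
                                        PixelFormat.Format24bppRgb, scan0))
            {
                // draw/display bmp here; the buffer behind scan0 must stay alive
            }
        }
        finally
        {
            VideoSource_FreeFrame(scan0); // free only after the Bitmap is disposed
        }
    }
}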
2) I don't like the fact that a new Bitmap instance is created and allocated every time (apart from the pixel data), but I suppose there is no better way of getting a Bitmap out of unmanaged data.
There is a way, and the keyword is unsafe. You can use raw pointers inside an unsafe block (you must enable unsafe code in the C# project settings), so you can rewrite the pixels of a bitmap you initialized once instead of constructing a new one per frame:
unsafe
{
    Bitmap myBmp = ...; // init the bitmap once, then reuse it per frame
    var data = myBmp.LockBits(new Rectangle(0, 0, myBmp.Width, myBmp.Height),
                              ImageLockMode.WriteOnly, myBmp.PixelFormat);
    for (int y = 0; y < data.Height; y++)
    {
        byte* row = (byte*)data.Scan0 + (y * data.Stride);
        // ... write the new pixel values into row ...
    }
    myBmp.UnlockBits(data); // always unlock when you're done
}
3) I think there is no need to tell the garbage collector that there is a big amount of data to clean up with GC.AddMemoryPressure(…), as it already knows, estimating from the information provided at initialization.
If you created a managed Bitmap object (with new), it will be released automatically once it goes out of scope and is no longer referenced.

Related

Image Resource Memory

Since I had a really nasty problem with a not-too-obvious unmanaged resource last month, I've become a little hyper-nervous about memory leaks.
I was just coding a very simple test app with a button with two different pictures on it, and noticed I'm not quite sure whether I have a "problem" here or not...
If I have 2 picture resources Pic1 and Pic2 and an ImageButton object, which is just some object inherited from UserControl with an overridden OnPaint:
protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    // stuff
    if (this.keyStatus)
    { this.imageButton.DefaultImage = Resource1.Pic1; }
    else
    { this.imageButton.DefaultImage = Resource1.Pic2; }
    e.Graphics.DrawImage(this.defaultImage, this.ClientRectangle);
}
Aside from OnPaint not being a good place for assigning DefaultImage (it's just here to show what I mean in a short piece of code), I am just assigning a reference to my precompiled resource here, am I not? I am not creating a copy as I would if I called new Bitmap(Resource1.Pic1).
So if I change keyStatus every 5 seconds, I'd have a very annoying, constantly changing picture on my screen, but no risk that changing the picture or making it invisible from time to time leaks memory. Correct?
Thanks a lot!
How object references work
Say you have a random object. The object is a class type (not a value type) and not IDisposable. Basically that means the following:
// y is some object
var x = y;
Now, x doesn't copy all the data from y, but simply makes a new reference to the contents of y. Simple.
To ensure that there won't be memory leaks, the GC keeps track of all objects and (periodically) checks which objects are reachable. If an object is still reachable, it won't be deleted - if it's not, it will be removed.
And then there was unmanaged code
As long as you stick to managed code, everything is fine. The moment you run into unmanaged code (say GDI+, which is the native counterpart of a lot of the System.Drawing stuff) you need to do extra bookkeeping to get rid of the data. After all, the .NET runtime doesn't know much about the unmanaged data; it merely knows that there is a pointer. Therefore, the GC will clean up the pointer, but not the data, which would result in memory leaks.
Therefore, the guys from .NET added IDisposable. By implementing IDisposable, you can implement extra (unmanaged) cleanup, such as releasing unmanaged memory, closing files, closing sockets, etc.
Now, the GC knows about finalizers, which are implemented as part of the Disposable pattern (details: https://msdn.microsoft.com/en-us/library/b1yfkh5e(v=vs.110).aspx). However, you usually don't want to wait for a GC run to clean up unmanaged resources. So, it's generally a good idea to call Dispose() as soon as an object that holds unmanaged resources can be cleaned up.
As is the case with System.Drawing.Bitmap, which implements IDisposable.
In most cases, you can simply wrap an IDisposable in a using statement, which will call Dispose() for you in a nice try/finally clause, e.g.:
using (var myBitmap = new Bitmap(...))
{
    // use myBitmap
}
// myBitmap, including all its resources, is gone.
What about resource bitmaps
@HansPassant pointed out that resource bitmaps generate a new Bitmap every time you access the resource property. This basically means that the bitmaps are copied and need to be disposed.
In other words:
// Free the old bitmap if it exists:
if (this.imageButton.DefaultImage != null)
{
    this.imageButton.DefaultImage.Dispose();
    this.imageButton.DefaultImage = null;
}
// assign the new imageButton.DefaultImage
So, this solves the memory leak, but will give you a lot of data that is copied around.
If you don't want to dispose
Here comes the part where I was surprised by Hans's remark :) Basically you assign a Bitmap to the button every time, so you don't want to copy the data over and over again; that makes little sense.
Therefore, you might get the idea to wrap the resources in static fields and simply never deallocate them at all:
static Bitmap myPic1 = Resource1.Pic1;
static Bitmap myPic2 = Resource1.Pic2;
...
if (this.keyStatus)
{
    this.imageButton.DefaultImage = myPic1;
}
else
{
    this.imageButton.DefaultImage = myPic2;
}
This works, but will give you issues if at some point you decide to generate images as well. To illustrate, say we change the code like this:
if (this.keyStatus)
{
    this.imageButton.DefaultImage = myPic1; // #1 don't dispose
}
else
{
    Bitmap myPic3 = CreateFancyBitmap(); // #2 do dispose
    this.imageButton.DefaultImage = myPic3;
}
Now, the issue here is with the combination: myPic1 is a static object and shouldn't be disposed; myPic3, on the other hand, is not, and should be. If you call Dispose() unconditionally, you'll get a nasty exception at #1 the next time the static bitmap is used, because its data is no longer there. Looking at the Bitmap reference alone, there's no proper way to distinguish the two.
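If you do need to mix the two, one workaround (my sketch, not something from Hans's remark) is to track ownership alongside the reference:
bool ownsCurrentImage; // true when we created the current DefaultImage ourselves

void SetButtonImage(Bitmap image, bool transferOwnership)
{
    var old = this.imageButton.DefaultImage;
    this.imageButton.DefaultImage = image;
    if (ownsCurrentImage && old != null)
        old.Dispose(); // safe: we know we created it
    ownsCurrentImage = transferOwnership;
}

// usage:
// SetButtonImage(myPic1, transferOwnership: false);             // shared static, never disposed
// SetButtonImage(CreateFancyBitmap(), transferOwnership: true); // generated, disposed on replace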

Garbage Collector too slow when working with large images

I am using Emgu OpenCV to grab images from a webcam and want to visualize them with WPF Image Control.
So I need to convert the image from Mat to something compatible with the Image control; I took this class from the Emgu examples:
public static class BitmapSourceConvert
{
    /// <summary>
    /// Delete a GDI object
    /// </summary>
    /// <param name="o">The pointer to the GDI object to be deleted</param>
    /// <returns></returns>
    [DllImport("gdi32")]
    private static extern int DeleteObject(IntPtr o);

    /// <summary>
    /// Convert an IImage to a WPF BitmapSource. The result can be used in the Source property of an Image control.
    /// </summary>
    /// <param name="image">The Emgu CV Image</param>
    /// <returns>The equivalent BitmapSource</returns>
    public static BitmapSource ToBitmapSource(IImage image)
    {
        using (System.Drawing.Bitmap source = image.Bitmap)
        {
            IntPtr ptr = source.GetHbitmap(); // obtain the HBitmap
            BitmapSource bs = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
                ptr,
                IntPtr.Zero,
                Int32Rect.Empty,
                System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions());
            DeleteObject(ptr); // release the HBitmap
            return bs;
        }
    }
}
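A typical call site then looks something like this (assuming a WPF Image control named imageDisplay and a frame produced by the capture loop):
// inside the frame-received handler:
imageDisplay.Source = BitmapSourceConvert.ToBitmapSource(capturedFrame);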
This works like a charm for small images (640x480, for instance). Watching the Task Manager (I am on Windows 8), I see the used memory increasing and decreasing. Works fine.
But when using larger images, like 1920x1080, the application crashes after a short period of time with an exception saying there is no more memory. Looking at the Task Manager again, I can see the memory consumption go up, go down once, and then go up until the exception is thrown.
It feels like the garbage collector doesn't run often enough to free all the space.
So I tried to start the garbage collector manually by adding GC.Collect() somewhere in the function. And it works again. Even with the large images.
I think calling the garbage collector manually is neither good style nor performant. Can anyone please give hints on how to solve this without calling GC.Collect()?
Finally, I think the problem is that the garbage collector has no idea how big the images are and is therefore unable to plan a reasonable schedule. I found the methods
GC.AddMemoryPressure(long bytesAllocated)
GC.RemoveMemoryPressure(long bytesAllocated)
These methods tell the garbage collector when large unmanaged objects are allocated and released, so it can plan its schedule in a better way.
The following code works without any memory problems:
public static BitmapSource ToBitmapSource(IImage image)
{
    using (System.Drawing.Bitmap source = image.Bitmap)
    {
        IntPtr ptr = source.GetHbitmap(); // obtain the HBitmap
        long imageSize = image.Size.Height * image.Size.Width * 4; // 4 bytes per pixel
        GC.AddMemoryPressure(imageSize);
        BitmapSource bs = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
            ptr,
            IntPtr.Zero,
            Int32Rect.Empty,
            System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions());
        DeleteObject(ptr); // release the HBitmap
        GC.RemoveMemoryPressure(imageSize);
        return bs;
    }
}
Where does the IImage parameter come from? Dispose of it after you have finished with it.
So I tried to start the garbage collector manually by adding
GC.Collect() somewhere in the function. And it works again. Even with
the large images.
Image implements a finalizer; if you don't dispose your images, those instances will survive more than one GC cycle. That's probably your issue.
The finalizer is the last point at which unmanaged (and managed) resources can be released if the developer doesn't call Dispose. When you call Dispose, finalization is suppressed, which makes the instances collectible by the GC straight away.
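For reference, the dispose pattern mentioned above looks roughly like this (a sketch with a hypothetical unmanaged buffer; the full pattern is on the linked MSDN page):
using System;
using System.Runtime.InteropServices;

class FrameBuffer : IDisposable
{
    IntPtr nativeBuffer; // hypothetical unmanaged resource
    bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // no need for the finalizer to run any more
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (nativeBuffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(nativeBuffer); // release the unmanaged memory
            nativeBuffer = IntPtr.Zero;
        }
        disposed = true;
    }

    ~FrameBuffer() { Dispose(false); } // safety net only; Dispose() is the fast path
}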
I can see the memory consumption go up, go down once, and then go up until the exception is thrown. It feels like the garbage collector doesn't run often enough to free all the space.
Normally this is not quite right. But it is possible when you open/close images frequently and the finalization queue keeps growing.
Here is a good article for you: The Dangers of the Large Object Heap...
This happens when one uses the wrong tool for the job. A video is not really a set of bitmaps; there are better ways to do it.
What I did last time I had to do this was use Direct3D. There is a WPF integration, and it is quite easy to set up a bitmap there. It allows a ton of manipulation in the video stream, too ;) Then you push the image directly into the Direct3D surface. Finished.
No code examples, sorry. It was a couple of years ago and I don't have the code ready.

How to free memory in C# that is allocated in C++

I have a C++ DLL which reads video frames from a camera. These frames get allocated in the DLL and returned via pointer to the caller (a C# program).
When C# is done with a particular frame of video, it needs to clean it up. The DLL interface and memory management are wrapped in a disposable class in C# so it's easier to control things. However, it seems like the memory doesn't get freed/released. The memory footprint of my process grows and grows, and in less than a minute I get allocation errors in the C++ DLL because there isn't any memory left.
The video frames are a bit over 9 MB each. There is a lot of code, so I'll simply provide the allocations/deallocations/types/etc.
First: allocation in C++ of the raw buffer for the camera bytes.
dst = new unsigned char[mFrameLengthInBytes];
Second: transfer of the raw pointer across the DLL boundary as an unsigned char* into an IntPtr in C#:
IntPtr pFrame = VideoSource_GetFrame(mCamera, ImageFormat.BAYER);
return new VideoFrame(pFrame, .... );
So now the IntPtr is passed into the ctor of the VideoFrame class. Inside the ctor, the IntPtr is copied to an internal member of the class as follows:
IntPtr dataPtr;

public VideoFrame(IntPtr pDataToCopy, ...)
{
    ...
    this.dataPtr = pDataToCopy;
}
My understanding is that this is a shallow copy and the class now references the original data buffer. The frame is used/processed/etc. Later, the VideoFrame class is disposed, and the following is used to clean up the memory:
Marshal.FreeHGlobal(this.dataPtr);
I suspect the problem is that dataPtr is an IntPtr, and C# has no way to know that the underlying buffer is actually 9 MB, right? Is there a way to tell it how much memory to release at that point? Am I using the wrong C# free method? Is there one specifically for this sort of situation?
You need to call the corresponding "free" method in the library you're using.
Memory allocated via new is part of the C++ runtime, and calling FreeHGlobal won't work. You need to call (one way or the other) delete[] against the memory.
If this is your own library, then create a function (e.g. VideoSource_FreeFrame) that deletes the memory, e.g.:
extern "C" __declspec(dllexport) void VideoSource_FreeFrame(unsigned char *buffer)
{
    delete[] buffer; // matches the new[] used for the allocation
}
And then call this from C#, passing in the IntPtr you got back.
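On the C# side, the import and the call would look something like this (a sketch; the DLL name and calling convention must match your library):
[DllImport("VideoLib.dll", CallingConvention = CallingConvention.Cdecl)]
private static extern void VideoSource_FreeFrame(IntPtr buffer);

// in VideoFrame's Dispose, instead of Marshal.FreeHGlobal:
VideoSource_FreeFrame(this.dataPtr);
this.dataPtr = IntPtr.Zero;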
You need to (in C++) call delete[] dst;. That means you need to provide an API that the C# code can call, like FreeFrame(...), which does exactly that.
I agree with the first answer. Do NOT free it in C# code using any magical, liturgical incantations. Write a method in C++ that frees the memory, and call it from your C# code. Do NOT get into the habit of allocating memory in one heap (native) and freeing it in another heap (managed); that's just bad news.
Remember one of the rules from the book Effective C++: allocate memory in the constructor and deallocate it in the destructor. And if you can't do it in the destructor, do it in an in-class method, not some global (or, even worse, friend) function.

Efficient image manipulation in C#

I'm using the System.Drawing classes to generate thumbnails and watermarked images from user-uploaded photos. The users are also able to crop the images using jCrop after uploading the original. I've taken over this code from someone else, and am looking to simplify and optimize it (it's being used on a high-traffic website).
The previous guy had static methods that received a bitmap as a parameter and returned one as well, internally allocating and disposing a Graphics object. My understanding is that a Bitmap instance contains the entire image in memory, while Graphics is basically a queue of draw operations, and that it is idempotent.
The process currently works as follows:
1) Receive the image and store it in a temporary file.
2) Receive crop coordinates.
3) Load the original bitmap into memory.
4) Create a new bitmap from the original, applying the cropping.
5) Do some crazy-ass brightness adjusting on the new bitmap, maybe (?) returning a new bitmap (I'd rather not touch this; pointer arithmetic abounds!); let's call this A.
6) Create another bitmap from the resulting one, applying the watermark (let's call this B1).
7) Create a 175x175 thumbnail bitmap from A.
8) Create a 45x45 thumbnail bitmap from A.
This seems like a lot of memory allocations. My question is this: is it a good idea to rewrite portions of the code and reuse the Graphics instances, in effect creating a pipeline? I only need one image in memory (the original upload), while the rest can be written directly to disk. All the generated images will need the crop and brightness transformations, plus a single transformation unique to each version, effectively creating a tree of operations.
Any thoughts or ideas?
Oh, and I should probably mention that this is the first time I'm really working with .NET, so if something I say seems confused, please bear with me and give me some hints.
Reusing Graphics objects will probably not result in significant performance gain.
The underlying GDI code simply creates a device context for the bitmap you have loaded in RAM (a memory DC).
The bottleneck of your operation appears to be in loading the image from disk.
Why reload the image from disk? If it is already in a byte array in RAM, which it should be when it is uploaded, you can just create a memory stream over the byte array and then create a bitmap from that memory stream.
In other words, save it to disk, but don't reload it; just operate on it from RAM.
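A sketch of that suggestion; note that GDI+ requires the stream to remain open for the lifetime of the Bitmap, so keep the using blocks nested:
using System.Drawing;
using System.IO;

static void ProcessUpload(byte[] uploadedBytes) // the bytes you already received
{
    using (var ms = new MemoryStream(uploadedBytes))
    using (var original = new Bitmap(ms)) // the stream must outlive the Bitmap
    {
        // crop, adjust brightness, and generate the thumbnails from 'original' here
    }
}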
Also, you shouldn't need to create a new bitmap to apply the watermark (depending on how it's done).
You should profile the operation to see where it needs improvement (or even if it needs to be improved.)
The process seems reasonable. Each image has to exist in memory before it is saved to disk - so each version of your thumbnails will be in memory first. The key to making sure this works efficiently is to Dispose your Graphics and Bitmap objects. The easiest way to do that is with the using statement.
using (Bitmap b = new Bitmap(175, 175))
using (Graphics g = Graphics.FromImage(b))
{
    ...
}
I completed a similar project a while ago and did some practical testing to see if there was a difference in performance if I reused the Graphics object rather than spin up a new one for every image. In my case, I was working on a steady stream of large numbers of images (>10,000 in a "batch"). I found that I did get a slight performance increase by reusing the Graphics object.
I also found I got a slight increase by using GraphicsContainers in the Graphics object to easily swap different states into/out of the object as it was used to perform various actions. (Specifically, I had to apply a crop and draw some text and a box (rectangle) on each image.) I don't know if this makes sense for what you need to do. You might want to look at the BeginContainer and EndContainer methods in the Graphics object.
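Roughly what that looks like (a sketch of the idea, not the exact code from my project; bmp is whatever Bitmap you are processing):
using (Graphics g = Graphics.FromImage(bmp))
{
    // snapshot the current state, apply a clip for the crop, draw, then restore
    GraphicsContainer state = g.BeginContainer();
    g.SetClip(new Rectangle(10, 10, 100, 100));   // the "crop" region
    g.DrawRectangle(Pens.Red, 20, 20, 60, 60);    // the box
    g.EndContainer(state);                        // back to the unclipped state

    // the text is drawn without the clip in effect
    g.DrawString("caption", SystemFonts.DefaultFont, Brushes.Black, 5f, 5f);
}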
In my case, the difference was slight. I don't know if you would get more or less improvement in your implementation. But since you will incur a cost in rewriting your code, you might want to consider finishing the current design and doing some perf tests before rewriting. Just a thought.
Some links you might find useful:
Using Nested Graphics Containers
GraphicsContainer Class
I am only going to throw this out there casually, but if you want a quick 'guide' to best practices for working with images, look at the Paint.NET project. For free, high-performance tools for image manipulation, look at AForge.NET.
The benefit of AForge is that it allows you to do a lot of these steps without creating a new bitmap every time. If this is for a website, I can almost guarantee that this code will be the performance bottleneck of the application.

How much memory does a C#/.NET object use?

I'm developing an application which currently has hundreds of objects created.
Is it possible to determine (or approximate) the memory allocated by an object (class instance)?
You could use a memory profiler like
.NET Memory Profiler (http://memprofiler.com/)
or
CLR Profiler (free) (http://clrprofiler.codeplex.com/)
A coarse way could be this, in case you want to know what's happening with a particular object:
// Measure starting-point memory use
long GC_MemoryStart = System.GC.GetTotalMemory(true);

// Allocate a new byte array of 20000 elements (about 20000 bytes)
byte[] MyByteArray = new byte[20000];

// Obtain measurements after creating the new byte[]
long GC_MemoryEnd = System.GC.GetTotalMemory(true);

// Ensure that the array stays in memory and doesn't get optimized away
GC.KeepAlive(MyByteArray);

// The difference approximates the size of the allocation
long allocated = GC_MemoryEnd - GC_MemoryStart;
Process-wide numbers could perhaps be obtained like this:
Process MyProcess = System.Diagnostics.Process.GetCurrentProcess();
long Process_MemoryStart = MyProcess.PrivateMemorySize64;
hope this helps ;)
The ANTS memory profiler will tell you exactly how much is allocated for each object/method/etc.
Here's a related post where we discussed determining the size of reference types.
You can also use WinDbg and either the SOS or SOSEX (like SOS but with a lot more commands, and some existing ones improved) WinDbg extensions. The command you would use to analyze an object at a particular memory address is !objsize.
One VERY important item to remember is that !objsize only gives you the size of the class itself and DOES NOT necessarily include the size of the aggregate objects contained inside the class. I have no idea why it doesn't do this, as it is quite frustrating and misleading at times.
I've created two feature suggestions on the Connect website that ask for this ability to be included in Visual Studio. Please vote for the items if you would like to see them added as well!
https://connect.microsoft.com/VisualStudio/feedback/details/637373/add-feature-to-debugger-to-view-an-objects-memory-footprint-usage
https://connect.microsoft.com/VisualStudio/feedback/details/637376/add-feature-to-debugger-to-view-an-objects-rooted-references
EDIT:
I'm adding the following to clarify some info from the answer provided by Charles Bretana:
The OP asked about the size of an 'object', not a 'class'. An object is an instance of a class. Maybe this is what you meant?
The memory allocated for an object does not include the JITted code. The JIT code lives in its own 'JIT Code Heap'.
The JIT only compiles code on a method by method basis - not at a class level. So if a method never gets called for a class, it is never JIT compiled and thus never has memory allocated for it on the JIT Code Heap.
As an aside, there are about 8 different heaps that the CLR uses:
Loader Heap: contains CLR structures and the type system
High Frequency Heap: statics, MethodTables, FieldDescs, interface map
Low Frequency Heap: EEClass, ClassLoader and lookup tables
Stub Heap: stubs for CAS, COM wrappers, P/Invoke
Large Object Heap: memory allocations that require more than 85k bytes
GC Heap: user allocated heap memory private to the app
JIT Code Heap: memory allocated by mscoreee (Execution Engine) and the JIT compiler for managed code
Process/Base Heap: interop/unmanaged allocations, native memory, etc
HTH
Each "class" requires enough memory to hold all of it's jit-compiled code for all it's members that have been called by the runtime, (although if you don't call a method for quite some time, the CLR can release that memory and re-jit it again if you call it again... plus enough memory to hold all static variables declared in the class... but this memory is allocated only once per class, no matter how many instances of the class you create.
For each instance of the class that you create, (and has not been Garbage collected) you can approximate the memory footprint by adding up the memory usage by each instance-based declared variable... (field)
reference variables (refs to other objects) take 4 or 8 bytes (32/64 bit OS ?)
int16, Int32, Int64 take 2,4, or 8 bytes, respectively...
string variable takes extra storage for some meta data elements, (plus the size of the address pointer)
In addition, each reference variable in an object could also be considered to "indirectly" include the memory taken up on the heap by the object it points to, although you would probably want to count that memory as belonging to that object not the variable that references it...
etc. etc.
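To make the tallying concrete, a rough example (it deliberately ignores the per-object header and field padding, which add several more bytes per instance):
class Sample
{
    int id;          // 4 bytes
    long timestamp;  // 8 bytes
    short flags;     // 2 bytes
    object next;     // pointer-sized: IntPtr.Size (4 or 8 bytes)
    string name;     // also pointer-sized here; the string data lives elsewhere
}

// approximate per-instance payload:
long approxBytes = 4 + 8 + 2 + IntPtr.Size + IntPtr.Size;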
To get a general sense for the memory allocation in your application, use the following sos command in WinDbg
!dumpheap -stat
Note that !dumpheap only gives you the bytes of the object type itself, and doesn't include the bytes of any other object types that it might reference.
If you want to see the total held bytes (the sum of the bytes of all objects referenced by your object) of a specific object type, use a memory profiler like dotTrace (http://www.jetbrains.com/profiler/).
If you can - Serialize it!
Dim myObjectSize As Long
Dim ms As New IO.MemoryStream
Dim bf As New Runtime.Serialization.Formatters.Binary.BinaryFormatter()
bf.Serialize(ms, myObject)
myObjectSize = ms.Position
There is the academic question of "What is the size of an object at runtime?" That is interesting, but it can only be properly answered by a profiler attached to the running process. I spent quite a while looking at this recently and determined that there is no generic method that is accurate and fast enough that you would ever want to use it in a production system. Simple cases like arrays of numerical types have easy answers, but beyond this the best answer would be: don't bother trying to work it out. Why do you want to know this? Is there other information available that could serve the same purpose?
In my case I ended up wanting to answer this question because I had various data that were useful, but could be discarded to free up RAM for more critical services. The poster boys here are an Undo Stack and a Cache.
Eventually I concluded that the right way to manage the size of the undo stack and the cache was to query for the amount of available memory (it's a 64-bit process, so it is safe to assume it is all available), then allow more items to be added if there is a sufficiently large buffer of RAM, and require items to be removed if RAM is running low.
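On newer runtimes (.NET Core 3.0+), that availability query can be done with GC.GetGCMemoryInfo; a sketch of the idea:
// Decide whether the undo stack / cache may keep growing.
GCMemoryInfo info = GC.GetGCMemoryInfo();
long available = info.TotalAvailableMemoryBytes;            // what the GC may use in total
long inUse = GC.GetTotalMemory(forceFullCollection: false); // current managed usage

// Grow only while we keep a comfortable buffer (the 30% margin is arbitrary).
bool mayGrow = inUse < available * 7 / 10;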
For any Unity dev lurking around for an answer, here's a way to compare the memory allocations of two different classes, inspired by @varun's answer:
void Start()
{
    var totalMemory = System.GC.GetTotalMemory(false);

    var class1 = new Class1[100000];
    for (int i = 0; i < 100000; i++)
    {
        class1[i] = new Class1();
    }
    var newTotalMemory = System.GC.GetTotalMemory(false);
    System.GC.KeepAlive(class1); // keep the array alive until after the measurement
    Debug.Log($"Class1: {newTotalMemory} - {totalMemory} = {newTotalMemory - totalMemory}");

    var class2 = new Class2[100000];
    for (int i = 0; i < 100000; i++)
    {
        class2[i] = new Class2(10, 10);
    }
    var newTotalMemory2 = System.GC.GetTotalMemory(false);
    System.GC.KeepAlive(class2); // same trick for the second batch
    Debug.Log($"Class2: {newTotalMemory2} - {newTotalMemory} = {newTotalMemory2 - newTotalMemory}");
}
