C# SharpDX device E_OUTOFMEMORY exception when loading a large scene

I am writing a SharpDX (v4.0.1) application that displays assemblies composed of many (possibly very big) separate parts. Each part has its own vertex and index buffer. When I try to load a very big assembly, I eventually run into an E_OUTOFMEMORY exception while creating a vertex buffer for a part:
int size = _meshVertices.Length * Utilities.SizeOf<MeshVertex>();
BufferDescription descr = new BufferDescription(size, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
_vertexBuffer = Buffer.Create(_device.Device, _meshVertices, descr);
$exception {"HRESULT: [0x8007000E], Module: [General], ApiCode: [E_OUTOFMEMORY/Out of memory], Message: Not enough storage is available to complete this operation.\r\n"} SharpDX.SharpDXException
While googling for a possible solution, I found out that there is a size limit on a single resource, but that does not seem to be the cause here, since the exception also occurs for smaller parts.
Another solution I tried was using the MemoryFailPoint class to reserve the memory before creating the buffer. But I suppose this checks for available RAM, while I believe the exception is caused by not having enough GPU memory. Anyway, this seems to alleviate the problem a bit, but it doesn't fix it completely; it just takes longer before it fails.
int size = _meshVertices.Length * Utilities.SizeOf<MeshVertex>();
using (System.Runtime.MemoryFailPoint memFailPoint = new System.Runtime.MemoryFailPoint(size / 1024 / 1024))
{
BufferDescription descr = new BufferDescription(size, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
_vertexBuffer = Buffer.Create(_device.Device, _meshVertices, descr);
}
My problem is that after I get the exception, the device is lost. I assume that resources such as vertex buffers are bound to the device, so recreating the device after it is lost would mean recreating all the resources, which would be very time consuming.
So preferably, I am looking for a way to know that the device does not have enough memory before I try to create the buffer, so that I could just discard the single part if needed. Is it possible to do something like that?
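The closest thing I have found so far is querying the adapter's memory budget through DXGI 1.4, roughly like this (an untested sketch; the exact SharpDX wrapper names are from memory, Adapter3 needs Windows 10, and the 90% threshold is just my own safety margin):
// Untested sketch: ask DXGI for the local memory budget before creating the buffer.
using (var dxgiDevice = _device.Device.QueryInterface<SharpDX.DXGI.Device>())
using (var adapter = dxgiDevice.Adapter)
using (var adapter3 = adapter.QueryInterface<SharpDX.DXGI.Adapter3>())
{
    var memInfo = adapter3.QueryVideoMemoryInfo(0, SharpDX.DXGI.MemorySegmentGroup.Local);
    long available = memInfo.Budget - memInfo.CurrentUsage;
    if (size > available * 0.9)
    {
        // Skip this part instead of risking E_OUTOFMEMORY and a lost device.
        return;
    }
}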
Any help would be very much appreciated. I am new to SharpDX so if you think I misunderstood the problem and something else is causing it, please let me know as well. Thanks in advance for any input on this.

Related

Emgu CV high resolution image stitching issue

I am using the Emgu CV library to stitch images. It works fine for small images, but I get an exception when processing high resolution images or images above 20 MB in size, and it also fails if I try to process more than 30 images.
Libraries I am using
Emgu.CV.UI
Emgu.CV.UI.GL
Emgu.CV.World
opencv_core2410
opencv_imgproc2410
Code
List<Image<Bgr, Byte>> sourceImages = new List<Image<Bgr, byte>>();
foreach (string path in ImgPath)
    sourceImages.Add(new Image<Bgr, Byte>(path));
using (Stitcher stitcher = new Stitcher(false))
{
    using (VectorOfMat vm = new VectorOfMat())
    {
        Mat result = new Mat();
        vm.Push(sourceImages.ToArray());
        stitcher.Stitch(vm, result);
        if (result.Bitmap != null)
        {
            result.Bitmap.Save(Application.StartupPath + "\\imgs\\StitchedImage.png");
        }
        else
        {
            MessageBox.Show("Something went wrong");
            return null;
        }
    }
}
Exception
'((Emgu.CV.MatDataAllocator)(result))._dataHandle.Target' threw an exception of type 'System.InvalidOperationException'
I was fairly certain that you are running into a memory issue, so I went ahead and made a simple console app targeting .NET 4.7.2, using the latest Emgu CV package from NuGet (version 3.4.3.3016) and the code below, on sample data from Adobe, which can be downloaded here.
If I compile as "AnyCPU" with "Prefer 32-bit" and run this code against the rio image set (I loaded up the PNGs for test purposes), the memory slowly climbs until it hits close to 3 GB and shortly thereafter it crashes with the exception about the refcount. Pretty clearly a memory issue. I then recompiled targeting 64-bit and was able to run the code successfully; memory usage peaked around 6 GB.
So, with that in mind, I am fairly certain that your issue is also memory related. You have not yet answered the question about whether you are building a 64-bit app, but based on what you are seeing I would guess that you are not. The solution to your problem is to compile as 64-bit and make sure you have enough memory. With the rio test set it jumped up to close to 6 GB; without your images I cannot tell you how large it might grow, but these kinds of operations are quite memory intensive, so more is better.
This would explain both the issue with large image files and the issue with a large number of small image files. I was able to process sets of 10 to 20 images with a 32-bit build, but as soon as I moved to the 50+ image sets it would only work with a 64-bit build due to the memory requirements.
var images = Directory.EnumerateFiles(@"C:\test\adobe\rio", "*.png", SearchOption.TopDirectoryOnly)
                      .Select(x => new Mat(x)).ToArray();
using (var stitcher = new Stitcher(false))
{
    using (var vm = new VectorOfMat(images))
    {
        var result = new Mat();
        stitcher.Stitch(vm, result);
        result.Bitmap.Save(@"C:\test\adobe\rio_stitched.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
        result.Dispose();
    }
}
foreach (var image in images) { image.Dispose(); }
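As a quick sanity check (this snippet is not part of the stitching code above), you can log at runtime whether the process actually ended up running as 64-bit:
// Trivial runtime check of the process bitness.
Console.WriteLine("64-bit process: " + Environment.Is64BitProcess +
                  ", 64-bit OS: " + Environment.Is64BitOperatingSystem);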

How can I get the amount of "free" memory in Unity3D on iOS?

I have a problem: if I try to create a screenshot with Texture2D.ReadPixels, the game crashes on some devices (most notably, iPod 4G). Supposedly, it happens because of low memory. Before creating the screenshot, I want to detect whether I can allocate the required amount of memory safely, and show a warning to the player if I expect a crash.
However, it seems that resources like textures are managed outside of the Mono VM. System.GC.GetTotalMemory returns 9 MB, when I have atlases as big as 16 MB. So, it seems that I have to write a plugin for that.
(There was a section describing that I didn't receive low memory warnings, but it seems that I was mistaken about it, and on Objective-C level, the warnings are successfully raised).
How can I get the amount of "free" memory that I can allocate without crashing? May be there's some other way to achieve the functionality I want?
To explain your own answer: this is not Objective-C code, but plain old C, using the Mach APIs to obtain statistics from the kernel. The Mach APIs are low-level APIs exported by XNU, a hybrid kernel composed of a top layer (which exports BSD APIs, like the usual system calls we know and love from UN*X) and a bottom layer, which is Mach.
The code uses the Mach "host" abstraction which (among other things) provides statistics about OS level utilization of resources.
Specifically, here's a full annotation:
#import <mach/mach.h> // Required for generic Mach typedefs, like the mach_port_t
#import <mach/mach_host.h> // Required for the host abstraction APIs.
extern "C" // C, rather than objective-c
{
const int HWUtils_getFreeMemory()
{
mach_port_t host_port;
mach_msg_type_number_t host_size;
vm_size_t pagesize;
// First, get a reference to the host port. Any task can do that, and this
// requires no privileges
host_port = mach_host_self();
host_size = sizeof(vm_statistics_data_t) / sizeof(integer_t);
// Get host page size - usually 4K
host_page_size(host_port, &pagesize);
vm_statistics_data_t vm_stat;
// Call host_statistics, requesting VM information.
if (host_statistics(host_port,          // As obtained from mach_host_self()
                    HOST_VM_INFO,       // Flavor - many more in <mach/host_info.h>
                    (host_info_t)&vm_stat, // OUT - this will be populated
                    &host_size)         // in/out - sizeof(vm_stat)
    != KERN_SUCCESS)
    NSLog(@"Failed to fetch vm statistics"); // log error
/* Stats in bytes */
// Calculating total and used just to show all available functionality
// This part is basic math. Counts are all in pages, so multiply by pagesize
// Memory used is sum of active pages, (resident, used)
// inactive, (resident, but not recently used)
// and wired (locked in memory, e.g. kernel)
natural_t mem_used = (vm_stat.active_count +
vm_stat.inactive_count +
vm_stat.wire_count) * pagesize;
natural_t mem_free = vm_stat.free_count * pagesize;
natural_t mem_total = mem_used + mem_free;
NSLog(#"used: %u free: %u total: %u", mem_used, mem_free, mem_total);
return (int) mem_free;
    }
}
So, with the help of some Objective-C gurus I was able to find a code snippet that does what I wanted. I must warn you that I don't understand exactly how this Obj-C code does what it does, and that it is considered a 'hack'; however, it turned out to be the best solution to my problem.
In plugin .mm file:
#import <mach/mach.h>
#import <mach/mach_host.h>
extern "C"
{
const int HWUtils_getFreeMemory()
{
mach_port_t host_port;
mach_msg_type_number_t host_size;
vm_size_t pagesize;
host_port = mach_host_self();
host_size = sizeof(vm_statistics_data_t) / sizeof(integer_t);
host_page_size(host_port, &pagesize);
vm_statistics_data_t vm_stat;
if (host_statistics(host_port, HOST_VM_INFO, (host_info_t)&vm_stat, &host_size) != KERN_SUCCESS)
NSLog(#"Failed to fetch vm statistics");
/* Stats in bytes */
// Calculating total and used just to show all available functionality
natural_t mem_used = (vm_stat.active_count +
vm_stat.inactive_count +
vm_stat.wire_count) * pagesize;
natural_t mem_free = vm_stat.free_count * pagesize;
natural_t mem_total = mem_used + mem_free;
NSLog(#"used: %u free: %u total: %u", mem_used, mem_free, mem_total);
return (int) mem_free;
}
}
In my hardware utility file:
[DllImport("__Internal")]
static extern int HWUtils_getFreeMemory();
#if UNITY_IPHONE && !UNITY_EDITOR
public static int GetFreeMemory()
{
return HWUtils_getFreeMemory();
}
#endif
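And a rough usage sketch on the C# side before taking the screenshot; the required-size estimate and the factor-of-two safety margin are just my own guesses:
// Rough estimate (my own): an RGBA32 screenshot needs width * height * 4 bytes;
// require roughly twice that much free memory before calling ReadPixels.
int requiredBytes = Screen.width * Screen.height * 4;
if (GetFreeMemory() > requiredBytes * 2)
{
    var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGBA32, false);
    tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
    tex.Apply();
}
else
{
    // Show the low-memory warning to the player instead.
}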
I am also getting a GL_OUT_OF_MEMORY error when reading pixels in Unity3D (iOS and Android). After doing a lot of testing, I found that the main problem is that my game's draw call count is too high when I take the screenshot using the ReadPixels method (more than 40 draw calls). Maybe you should investigate this factor too.

Fellow Oak DICOM - changing image window level

I am not an experienced programmer, I just need to add a DICOM viewer to my VS2010 project. I can display the image in Windows Forms, however I can't figure out how to change the window center and width. Here is my code:
DicomImage image = new DicomImage(_filename);
int maxV = image.NumberOfFrames;
sbSlice.Maximum = maxV - 1;
image.WindowCenter = 7.0;
double wc = image.WindowCenter;
double ww = image.WindowWidth;
Image result = image.RenderImage(0);
DisplayImage(result);
It did not work. I don't know if this is the right approach.
The DicomImage class was not created with the intention of it being used to implement an image viewer. It was created to render preview images in the DICOM Dump utility and to test the image compression/decompression codecs. Maybe it was a mistake to include it in the library at all?
It is difficult for me to find fault in the code as being buggy when it is being used for something far beyond its intended functionality.
That said, I have taken some time to modify the code so that the WindowCenter/WindowWidth properties apply to the rendered image. You can find these modifications in the Git repo.
var img = new DicomImage(fileName);
img.WindowCenter = 2048.0;
img.WindowWidth = 4096.0;
DisplayImage(img.RenderImage(0));
I looked at the code and it looked extremely buggy. https://github.com/rcd/fo-dicom/blob/master/DICOM/Imaging/DicomImage.cs
In the current buggy implementation, setting the WindowCenter or WindowWidth properties has no effect unless Dataset.Get(DicomTag.PhotometricInterpretation) is either Monochrome1 or Monochrome2 during Load(). This is already ridiculous, but it still cannot be used, because the _renderOptions variable is only set in a single place and is immediately used for the _pipeline creation (not giving you a chance to change it using the WindowCenter property). Your only chance is the grayscale _renderOptions initialization: _renderOptions = GrayscaleRenderOptions.FromDataset(Dataset);.
The current solution: Your dataset should have
DicomTag.WindowCenter set appropriately
DicomTag.WindowWidth != 0.0
DicomTag.PhotometricInterpretation == Monochrome1 or Monochrome2
The following code accomplishes that:
DicomDataset dataset = DicomFile.Open(fileName).Dataset;
//dataset.Set(DicomTag.WindowWidth, 200.0); //the WindowWidth must be non-zero
dataset.Add(DicomTag.WindowCenter, "100.0");
//dataset.Add(DicomTag.PhotometricInterpretation, "MONOCHROME1"); //ValueRepresentations tag is broken
dataset.Add(new DicomCodeString(DicomTag.PhotometricInterpretation, "MONOCHROME1"));
DicomImage image = new DicomImage(dataset);
image.RenderImage();
The best solution: wait until this buggy library is fixed.

Memory leak using OpenCvSharp

So I'm trying to use OpenCvSharp to create an augmented reality tracker, and I'm having a problem with a memory leak.
I am trying to identify rectangles (which are my markers) in a camera image. I'm fairly sure that the offending code is:
CvSeq<CvPoint> firstcontour = null;
List<MarkerRectangle> rectangles = new List<MarkerRectangle>();
CvMemStorage storage = Cv.CreateMemStorage(0);
CvMemStorage storagepoly = Cv.CreateMemStorage(0);
IplImage gsImageContour = Cv.CreateImage(Cv.GetSize(thresholdedImage), thresholdedImage.Depth, thresholdedImage.NChannels);
//find contours
Cv.Copy(thresholdedImage, gsImageContour, null);
int contourCount = Cv.FindContours(gsImageContour, storage, out firstcontour, CvContour.SizeOf,
ContourRetrieval.CComp, ContourChain.ApproxSimple, Cv.Point(0, 0));
CvSeq<CvPoint> polycontour = firstcontour.ApproxPoly(CvContour.SizeOf, storagepoly, ApproxPolyMethod.DP, 4, true);
I'm fairly sure the offending lines are:
int contourCount = Cv.FindContours(gsImageContour, storage, out firstcontour, CvContour.SizeOf, ContourRetrieval.CComp, ContourChain.ApproxSimple, Cv.Point(0, 0));
and/or
CvSeq<CvPoint> polycontour = firstcontour.ApproxPoly(CvContour.SizeOf, storagepoly, ApproxPolyMethod.DP, 4, true);
However, I'm not sure what I might be doing wrong; since they are not stored in any global variables, I would expect the memory to be freed when they go out of scope at the end of the method.
It may be this bug, in which case I suspect I lack the technical expertise to fix it, as, with the exception of a couple of courses at uni, I haven't worked with unmanaged code. However, I tried implementing the fix it lists under additional information and that didn't work, so I'm now thinking it may not be the cause of my problem.
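One thing I still plan to try is explicitly releasing the native-backed objects when the method finishes, roughly like this (just my assumption that Dispose on the OpenCvSharp wrappers frees the underlying cvMemStorage / IplImage memory):
try
{
    // ... contour detection and marker extraction as above ...
}
finally
{
    // Release the native memory held by the storages and the temporary image,
    // even if contour processing throws.
    storagepoly.Dispose();
    storage.Dispose();
    gsImageContour.Dispose();
}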

Get item text from SysListView32

I am trying to get the text of a SysListView32 in another app from C#.
I can get LVM_GETITEMCOUNT to work, but LVM_GETITEMW = 0x1000 + 13 always returns -1. How can I get the text from C#? I am new. Thanks very much!
ParenthWnd = FindWindow(ParentClass, ParentWindow);
if (!ParenthWnd.Equals(IntPtr.Zero))
{
    zWnd = FindWindowEx(ParenthWnd, zWnd, zClass, zWindow);
    if (!zWnd.Equals(IntPtr.Zero))
    {
        int user = SendMessage(zWnd, LVM_GETITEMCOUNT, 0, 0);
    }
}
You need to work harder to read and write the LVITEM memory since you are working with a control owned by another process. You therefore need to read and write memory in that process. You can't do that without calling ReadProcessMemory, WriteProcessMemory etc.
The most commonly cited example of the techniques involved is this Code Project article: Stealing Program's Memory. Watch out for 32/64 bit gotchas.
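To make the mechanics concrete, here is a rough sketch of reading one item's text with LVM_GETITEMTEXTW. The constants, the LVITEM layout, and the helper name are my own transcription from the Win32 headers, and the sketch assumes your process and the target process have the same bitness (otherwise the struct layout and pointer sizes differ):
using System;
using System.Runtime.InteropServices;
using System.Text;

static class RemoteListView
{
    const int LVM_FIRST = 0x1000;
    const int LVM_GETITEMTEXTW = LVM_FIRST + 115;
    const uint LVIF_TEXT = 0x0001;
    const uint PROCESS_VM_OPERATION = 0x0008, PROCESS_VM_READ = 0x0010, PROCESS_VM_WRITE = 0x0020;
    const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, MEM_RELEASE = 0x8000, PAGE_READWRITE = 0x04;

    [StructLayout(LayoutKind.Sequential)]
    struct LVITEM
    {
        public uint mask;
        public int iItem, iSubItem;
        public uint state, stateMask;
        public IntPtr pszText;      // must point to memory valid in the TARGET process
        public int cchTextMax, iImage;
        public IntPtr lParam;
    }

    [DllImport("user32.dll")] static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);
    [DllImport("user32.dll")] static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint pid);
    [DllImport("kernel32.dll")] static extern IntPtr OpenProcess(uint access, bool inherit, uint pid);
    [DllImport("kernel32.dll")] static extern IntPtr VirtualAllocEx(IntPtr hProc, IntPtr addr, IntPtr size, uint type, uint protect);
    [DllImport("kernel32.dll")] static extern bool VirtualFreeEx(IntPtr hProc, IntPtr addr, IntPtr size, uint type);
    [DllImport("kernel32.dll")] static extern bool WriteProcessMemory(IntPtr hProc, IntPtr addr, ref LVITEM item, IntPtr size, out IntPtr written);
    [DllImport("kernel32.dll")] static extern bool ReadProcessMemory(IntPtr hProc, IntPtr addr, byte[] buffer, IntPtr size, out IntPtr read);
    [DllImport("kernel32.dll")] static extern bool CloseHandle(IntPtr handle);

    public static string GetItemText(IntPtr listViewHwnd, int itemIndex, int subItem)
    {
        uint pid;
        GetWindowThreadProcessId(listViewHwnd, out pid);
        IntPtr hProc = OpenProcess(PROCESS_VM_OPERATION | PROCESS_VM_READ | PROCESS_VM_WRITE, false, pid);

        int itemSize = Marshal.SizeOf(typeof(LVITEM));
        const int maxChars = 512;

        // One remote allocation: the LVITEM first, then the text buffer right after it.
        IntPtr remote = VirtualAllocEx(hProc, IntPtr.Zero, (IntPtr)(itemSize + maxChars * 2),
                                       MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

        LVITEM item = new LVITEM
        {
            mask = LVIF_TEXT,
            iItem = itemIndex,
            iSubItem = subItem,
            pszText = new IntPtr(remote.ToInt64() + itemSize),
            cchTextMax = maxChars
        };
        IntPtr written, read;
        WriteProcessMemory(hProc, remote, ref item, (IntPtr)itemSize, out written);

        // Ask the list view (in the other process) to fill the text buffer.
        SendMessage(listViewHwnd, LVM_GETITEMTEXTW, (IntPtr)itemIndex, remote);

        byte[] textBytes = new byte[maxChars * 2];
        ReadProcessMemory(hProc, new IntPtr(remote.ToInt64() + itemSize), textBytes, (IntPtr)textBytes.Length, out read);

        VirtualFreeEx(hProc, remote, IntPtr.Zero, MEM_RELEASE);
        CloseHandle(hProc);

        string text = Encoding.Unicode.GetString(textBytes);
        int nul = text.IndexOf('\0');
        return nul >= 0 ? text.Substring(0, nul) : text;
    }
}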
