I am developing an app with Mono for Android.
I have been struggling with OutOfMemory exceptions for the last few days and am starting to lose hope!
I have a ListView displaying anything from 200 to 600 items. These items consist of a bitmap thumbnail and some text.
I am decoding the bitmap asynchronously using an AsyncTask; here is the code:
public class BitmapWorkerTask : AsyncTask
{
private WeakReference imageViewReference;
public string thisURL = "";
private int sampleSize = 0;
private int reqHeight = 0;
private int reqWidth = 0;
public BitmapWorkerTask(ImageView imageView, int pSampleSize, int pReqWidth, int pReqHeight)
{
//_____________________________________________________________________
// Use a WeakReference to ensure the ImageView can be garbage collected
imageViewReference = new WeakReference(imageView);
reqHeight = pReqHeight;
reqWidth = pReqWidth;
sampleSize = pSampleSize;
}
protected override Java.Lang.Object DoInBackground(params Java.Lang.Object[] @params)
{
string strUrl = @params[0].ToString();
try
{
return DecodeSampleBitmapFromStream(strUrl, reqWidth, reqHeight);
}
catch (Exception ex)
{
return null;
}
}
protected override void OnPostExecute(Java.Lang.Object result)
{
base.OnPostExecute(result);
if (IsCancelled)
{
result = null;
Log.Debug("TT", "OnPostExecute - Task Cancelled");
}
else
{
Bitmap bmpResult = result as Bitmap;
if (imageViewReference != null && bmpResult != null)
{
ImageView view = imageViewReference.Target as ImageView;
if (view != null)
{
view.SetImageBitmap(bmpResult);
}
}
}
}
public static int CalculateInSampleSize( BitmapFactory.Options options, int reqWidth, int reqHeight)
{
//_____________________________
// Raw height and width of image
int height = options.OutHeight;
int width = options.OutWidth;
int inSampleSize = 1;
if (height > reqHeight || width > reqWidth)
{
if (width > height)
{
inSampleSize = (int)Math.Round((float)height / (float)reqHeight);
}
else
{
inSampleSize = (int)Math.Round((float)width / (float)reqWidth);
}
}
return inSampleSize;
}
public static Bitmap DecodeSampleBitmapFromStream(string strUrl, int reqWidth, int reqHeight)
{
URL url = new URL(strUrl);
try
{
//______________________________________________________________
// First decode with inJustDecodeBounds=true to check dimensions
BitmapFactory.Options options = new BitmapFactory.Options();
options.InJustDecodeBounds = true;
BitmapFactory.DecodeStream(url.OpenConnection().InputStream, null, options);
//______________________
// Calculate inSampleSize
options.InSampleSize = CalculateInSampleSize(options, reqWidth, reqHeight);
//____________________________________
// Decode bitmap with inSampleSize set
options.InJustDecodeBounds = false;
return BitmapFactory.DecodeStream(url.OpenConnection().InputStream, null, options);
}
catch (Exception ex)
{
return null;
}
finally
{
url.Dispose();
}
}
}
I am starting this AsyncTask from the list adapter's GetView() using this method:
public void loadBitmap(string url, ImageView imageView)
{
if (Common.cancelPotentialWork(url, imageView))
{
BitmapWorkerTask task = new BitmapWorkerTask(imageView, 2, 80,80);
AsyncDrawable asyncDrawable = new AsyncDrawable(null, null, task);
imageView.SetImageDrawable(asyncDrawable);
task.Execute(url);
}
}
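For reference, AsyncDrawable follows the standard pattern from the Android bitmap-loading guide; the class isn't shown above, so the following is only a rough C# sketch of what it typically looks like:
public class AsyncDrawable : BitmapDrawable
{
    private readonly WeakReference bitmapWorkerTaskReference;

    public AsyncDrawable(Resources res, Bitmap bitmap, BitmapWorkerTask bitmapWorkerTask)
        : base(res, bitmap)
    {
        // Hold the task weakly so the drawable does not keep it (or its ImageView) alive
        bitmapWorkerTaskReference = new WeakReference(bitmapWorkerTask);
    }

    public BitmapWorkerTask BitmapWorkerTask
    {
        get { return bitmapWorkerTaskReference.Target as BitmapWorkerTask; }
    }
}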
Everything works as expected for a period of time, but if I continuously scroll up and down through my list I eventually start getting OutOfMemoryExceptions and the app crashes. My understanding of how the Android list works is that it disposes of the list item views as they move off screen, but it feels as though this is not happening!
It feels like all those bitmaps I am decoding as I scroll through the list are, for whatever reason, being held in memory. What could I be missing here that is preventing those bitmaps from being disposed of? Where could I implement a call to Bitmap.Recycle() to ensure the bitmaps are cleared?
I did a test whereby I made a call to GC.Collect on every call to GetView, which did seem to keep my memory usage fairly consistent, but I know this shouldn't be needed and it affects scrolling performance.
Why, when I scroll through my list without the call to GC.Collect(), am I not seeing those garbage collection messages indicating that the system is, in fact, doing routine collections?
Any help is appreciated, I am losing the will to code!
My understanding of how the Android list works is that it disposes of the list item views as they move off screen, but it feels as though this is not happening!
This isn't correct.
What Android does is hold on to a set of item views and try to reuse them after they have gone off screen. This is what the convertView parameter is for.
I can't see your Adapter code posted in the question, so I'm not sure how your code uses the convertView parameter, but I'd guess that when a convertView is passed in, it should:
it should cancel any existing async image fetch/conversion
it should start a new one
The MvvmCross code may be a little too complicated for you as a reference/example here, but you can at least see convertView in use in MvxBindableListAdapter.cs - see protected virtual View GetBindableView(View convertView, object source, int templateId)
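As a rough sketch only (the layout/resource names and item properties here are placeholders, not your actual code), a GetView that reuses convertView and relies on cancelPotentialWork might look like:
public override View GetView(int position, View convertView, ViewGroup parent)
{
    // Reuse the recycled row if Android hands one back, otherwise inflate a new one
    var view = convertView ?? context.LayoutInflater.Inflate(Resource.Layout.listItem, parent, false);
    var imageView = view.FindViewById<ImageView>(Resource.Id.thumbnail);

    // loadBitmap calls Common.cancelPotentialWork first, which should cancel any
    // BitmapWorkerTask still attached to this recycled ImageView before starting a new one
    loadBitmap(items[position].ImageUrl, imageView);

    view.FindViewById<TextView>(Resource.Id.itemText).Text = items[position].Text;
    return view;
}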
This is my first time trying to render something and I am having big trouble... I am using the DirectN library and the SwapChainSurface class from KlearTouch.MediaPlayer. I am trying to render a BGRA32 frame using D3D11Device.
For this I have slightly modified OnNewSurfaceAvailable:
public void OnNewSurfaceAvailable2(Action<ID3D11Device, ID3D11DeviceContext> updateSurface)
{
if (rendering)
{
return;
}
try
{
if (this.swapChain is null || swapChainComObject is null)
{
return;
}
swapChainComObject.GetDesc(out var swapChainDesc).ThrowOnError();
if (swapChainDesc.BufferDesc.Width != PanelWidth || swapChainDesc.BufferDesc.Height != PanelHeight)
{
swapChainComObject.ResizeBuffers(2, PanelWidth, PanelHeight, DXGI_FORMAT.DXGI_FORMAT_UNKNOWN, 0).ThrowOnError();
}
var device = swapChain.Object.GetDevice1().Object.As<ID3D11Device>();
device.GetImmediateContext(out var context);
// context.ClearRenderTargetView(renderTargetView.Object, new []{0f, 1f, 1f, 1f});
updateSurface(device, context);
swapChainComObject.Present(1, 0).ThrowOnError();
}
catch (ObjectDisposedException)
{
Reinitialize();
}
catch (Exception ex)
{
System.Diagnostics.Debug.WriteLine("\nException: " + ex, nameof(SwapChainSurface) + '.' + nameof(OnNewSurfaceAvailable));
}
rendering = false;
}
OnNewSurfaceAvailable2 is called from:
void VideoFrameArrived(Bgra32VideoFrame frame)
{
DispatcherQueue.TryEnqueue(() =>
{
previewSurface.OnNewSurfaceAvailable2((device, context) =>
{
var size = frame.m_height * frame.m_height * 4;
D3D11_TEXTURE2D_DESC td;
td.ArraySize = 1;
td.BindFlags = (uint) D3D11_BIND_FLAG.D3D11_BIND_SHADER_RESOURCE;
td.Usage = D3D11_USAGE.D3D11_USAGE_DYNAMIC;
td.CPUAccessFlags = (uint) D3D11_CPU_ACCESS_FLAG.D3D11_CPU_ACCESS_WRITE;
td.Format = DXGI_FORMAT.DXGI_FORMAT_B8G8R8A8_UNORM;
td.Height = (uint) frame.m_height;
td.Width = (uint) frame.m_width;
td.MipLevels = 1;
td.MiscFlags = 0;
td.SampleDesc.Count = 1;
td.SampleDesc.Quality = 0;
D3D11_SUBRESOURCE_DATA srd;
srd.pSysMem = frame.m_pixelBuffer;
srd.SysMemPitch = (uint) frame.m_height;
srd.SysMemSlicePitch = 0;
var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new []{srd});
var mappedResource = context.Map(texture.Object, 0, D3D11_MAP.D3D11_MAP_WRITE_DISCARD);
var mappedData = mappedResource.pData;
unsafe
{
Buffer.MemoryCopy(frame.m_pixelBuffer.ToPointer(), mappedData.ToPointer(), size, size);
}
// Just for debug
var pixelsInFrame = new byte[size];
var pixelsInResource = new byte[size];
Marshal.Copy(frame.m_pixelBuffer, pixelsInFrame, 0, size);
Marshal.Copy(mappedResource.pData, pixelsInResource, 0, size);
context.Unmap(texture.Object, 0);
});
});
}
The problem is that I can't see anything rendered; the surface stays black, and I assume it should not be.
Update: Project repository
Update 2:
I solved my issue. I had too little knowledge about DX11, so I had to study more about how things work there. With this knowledge I updated the repository, which can now display a preview from a Blackmagic Design card. It is just an example with many issues, so be careful, and feel free to look there for inspiration.
There are a number of issues here.
First, on frame arrived you have:
var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new []{srd});
So you create a texture, but you do not use it anywhere; it needs to be blitted to the swapchain (you can do a CopyResource on the device context or draw a full-screen triangle/quad).
Note that CopyResource will only work if your swapchain has the same size as your incoming texture, which is rather unlikely, so most likely you will have to blit with a shader.
Also, you are actually copying the data into the texture twice:
var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new []{srd});
Since you provide initial data, the content is already there.
Also, the pitch is incorrect:
srd.SysMemPitch = (uint) frame.m_height;
Pitch is the length (in bytes) of a line, so it should be:
srd.SysMemPitch = frame.GetRowBytes();
Please also note that in the case of a non-converted Decklink frame,
GetRowBytes can be different from width*4 (they can align the row size to a multiple of 16/32 or other values).
Next, in the case of the resource map, the following is also incorrect:
unsafe
{
Buffer.MemoryCopy(frame.m_pixelBuffer.ToPointer(), mappedData.ToPointer(), size, size);
}
You are not checking the pitch/stride requirement of the texture (which can be different as well),
so you need to do:
if (mappedResource.RowPitch == frame.GetRowBytes())
{
//here you can use a direct copy as above
}
else
{
//here you need to copy data line per line
}
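For the line-per-line branch, a rough sketch reusing the names from your snippet (this assumes GetRowBytes returns the source pitch in bytes):
unsafe
{
    byte* src = (byte*)frame.m_pixelBuffer.ToPointer();
    byte* dst = (byte*)mappedResource.pData.ToPointer();
    long srcPitch = frame.GetRowBytes();          // bytes per row in the Decklink frame
    long dstPitch = mappedResource.RowPitch;      // bytes per row in the mapped texture
    long rowBytes = Math.Min(srcPitch, dstPitch); // payload actually copied per row
    for (int y = 0; y < frame.m_height; y++)
    {
        // Copy one row, then advance each pointer by its own pitch
        Buffer.MemoryCopy(src, dst, dstPitch, rowBytes);
        src += srcPitch;
        dst += dstPitch;
    }
}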
Setup
Hey,
I'm trying to capture my screen and send/communicate the stream via MR-WebRTC. Communication between two PCs, or a PC and a HoloLens, worked with webcams for me, so I thought the next step could be streaming my screen. So I took the UWP application that I already had, which worked with my webcam, and tried to make things work:
The UWP app is based on the example UWP app from MR-WebRTC.
For capturing I'm using the instructions from MS about screen capturing via GraphicsCapturePicker.
So now I'm stuck in the following situation:
I get a frame from the screen capturing, but its type is Direct3D11CaptureFrame. You can see it below in the code snippet.
MR-WebRTC takes a frame of type I420AVideoFrame (also in a code snippet).
How can I "connect" them?
I420AVideoFrame wants a frame in the I420A format (YUV 4:2:0).
Configuring the framePool I can set the DirectXPixelFormat, but it has no YUV420.
I found this post on SO saying that it is possible.
Code snippet, frame from Direct3D:
_framePool = Direct3D11CaptureFramePool.Create(
_canvasDevice, // D3D device
DirectXPixelFormat.B8G8R8A8UIntNormalized, // Pixel format
3, // Number of frames
_item.Size); // Size of the buffers
_session = _framePool.CreateCaptureSession(_item);
_session.StartCapture();
_framePool.FrameArrived += (s, a) =>
{
using (var frame = _framePool.TryGetNextFrame())
{
// Here I would take the Frame and call the MR-WebRTC method LocalI420AFrameReady
}
};
Code snippet, frame from WebRTC:
// This is the way with the webcam; so LocalI420 was subscribed to
// the event I420AVideoFrameReady and got the frame from there
_webcamSource = await DeviceVideoTrackSource.CreateAsync();
_webcamSource.I420AVideoFrameReady += LocalI420AFrameReady;
// enqueueing the newly captured video frames into the bridge,
// which will later deliver them when the Media Foundation
// playback pipeline requests them.
private void LocalI420AFrameReady(I420AVideoFrame frame)
{
lock (_localVideoLock)
{
if (!_localVideoPlaying)
{
_localVideoPlaying = true;
// Capture the resolution into local variable useable from the lambda below
uint width = frame.width;
uint height = frame.height;
// Defer UI-related work to the main UI thread
RunOnMainThread(() =>
{
// Bridge the local video track with the local media player UI
int framerate = 30; // assumed, for lack of an actual value
_localVideoSource = CreateI420VideoStreamSource(
width, height, framerate);
var localVideoPlayer = new MediaPlayer();
localVideoPlayer.Source = MediaSource.CreateFromMediaStreamSource(
_localVideoSource);
localVideoPlayerElement.SetMediaPlayer(localVideoPlayer);
localVideoPlayer.Play();
});
}
}
// Enqueue the incoming frame into the video bridge; the media player will
// later dequeue it as soon as it's ready.
_localVideoBridge.HandleIncomingVideoFrame(frame);
}
I found a solution for my problem by creating an issue on the GitHub repo. The answer was provided by KarthikRichie:
You have to use the ExternalVideoTrackSource
You can convert from the Direct3D11CaptureFrame to Argb32VideoFrame
// Setting up external video track source
_screenshareSource = ExternalVideoTrackSource.CreateFromArgb32Callback(FrameCallback);
struct WebRTCFrameData
{
public IntPtr Data;
public uint Height;
public uint Width;
public int Stride;
}
public void FrameCallback(in FrameRequest frameRequest)
{
try
{
if (FramePool != null)
{
using (Direct3D11CaptureFrame _currentFrame = FramePool.TryGetNextFrame())
{
if (_currentFrame != null)
{
WebRTCFrameData webRTCFrameData = ProcessBitmap(_currentFrame.Surface).Result;
frameRequest.CompleteRequest(new Argb32VideoFrame()
{
data = webRTCFrameData.Data,
height = webRTCFrameData.Height,
width = webRTCFrameData.Width,
stride = webRTCFrameData.Stride
});
}
}
}
}
catch (Exception ex)
{
}
}
private async Task<WebRTCFrameData> ProcessBitmap(IDirect3DSurface surface)
{
SoftwareBitmap softwareBitmap = await SoftwareBitmap.CreateCopyFromSurfaceAsync(surface, Windows.Graphics.Imaging.BitmapAlphaMode.Straight);
byte[] imageBytes = new byte[4 * softwareBitmap.PixelWidth * softwareBitmap.PixelHeight];
softwareBitmap.CopyToBuffer(imageBytes.AsBuffer());
WebRTCFrameData argb32VideoFrame = new WebRTCFrameData();
argb32VideoFrame.Data = GetByteIntPtr(imageBytes);
argb32VideoFrame.Height = (uint)softwareBitmap.PixelHeight;
argb32VideoFrame.Width = (uint)softwareBitmap.PixelWidth;
var test = softwareBitmap.LockBuffer(BitmapBufferAccessMode.Read);
int count = test.GetPlaneCount();
var pl = test.GetPlaneDescription(count - 1);
argb32VideoFrame.Stride = pl.Stride;
return argb32VideoFrame;
}
private IntPtr GetByteIntPtr(byte[] byteArr)
{
IntPtr intPtr2 = System.Runtime.InteropServices.Marshal.UnsafeAddrOfPinnedArrayElement(byteArr, 0);
return intPtr2;
}
I am attempting to make a Universal Windows Platform app in C#, and I have spent the last week mostly on attempting to get a byte array out of a CanvasBitmap object. At first I thought I could use the CanvasBitmap function byteArray = pictureBitmap.GetPixelBytes, where pictureBitmap is the CanvasBitmap object that has a loaded image in it.
I did some debugging and am pretty sure pictureBitmap has an image saved in it as a CanvasBitmap; however, getting the image into a byte[] is a real challenge, and GetPixelBytes does not return the header information and only outputs a .bmp, so I can't really use that.
After that I tried implementing my own IRandomAccessStream interface as well as following along with a tutorial; however, no matter what, the following code only outputs exactly 2^16 bytes on the second debug output, not the whole image.
using (var randomStream = new ImageStream(1000000))
{
Debug.WriteLine("randomStream Initial Length: " + randomStream.Size);
await pictureBitmap.SaveAsync(randomStream, CanvasBitmapFileFormat.Jpeg, 0.8f);
Debug.WriteLine("randomStream After Length: " + randomStream.Size);
}
For the implementation of the IRandomAccessStream interface I tried both Stream and MemoryStream, although both only output 65536 bytes. Any help is greatly appreciated, thanks.
EDIT
This is my code for the ImageStream class, which implements the interface IRandomAccessStream. If I had to guess where the problem is, I believe it could be in either FlushAsync(), ReadAsync(IBuffer buffer, uint count, InputStreamOptions options), or Seek(ulong position). I know that when I save the image to a file it is the correct size and format; it is just when saving to a Stream that I seem to have trouble.
class ImageStream : IRandomAccessStream
{
private MemoryStream internalImageStream;
public ImageStream()
{
internalImageStream = new MemoryStream();
}
public ImageStream(int size)
{
internalImageStream = new MemoryStream(size);
}
public byte[] ConvertToArray()
{
return this.internalImageStream.ToArray();
}
public int Capacity
{
get { return this.internalImageStream.Capacity; }
set { this.internalImageStream.Capacity = (int)value; }
}
public bool CanRead
{
get { return true; }
}
public bool CanWrite
{
get { return true; }
}
public ulong Position
{
get { return (ulong)this.internalImageStream.Position; }
set { this.internalImageStream.Position = (long)value; }
}
public ulong Size
{
get { return (ulong)this.internalImageStream.Length; }
set { this.internalImageStream.SetLength((long)value); }
}
public IRandomAccessStream CloneStream()
{
ImageStream newImageStream = new ImageStream();
newImageStream.internalImageStream = this.internalImageStream;
return newImageStream;
}
public void Dispose()
{
this.internalImageStream.Dispose();
}
public IAsyncOperation<bool> FlushAsync()
{
var outputStream = this.GetOutputStreamAt(0);
return outputStream.FlushAsync();
}
public IInputStream GetInputStreamAt(ulong position)
{
this.internalImageStream.Seek((long)position, SeekOrigin.Begin);
return this.internalImageStream.AsInputStream();
}
public IOutputStream GetOutputStreamAt(ulong position)
{
this.internalImageStream.Seek((long)position, SeekOrigin.Begin);
return this.internalImageStream.AsOutputStream();
}
public void Seek(ulong position)
{
this.internalImageStream.Seek((long)position, 0);
}
public IAsyncOperationWithProgress<IBuffer,uint> ReadAsync(IBuffer buffer, uint count, InputStreamOptions options)
{
var inputStream = this.GetInputStreamAt(0);
return inputStream.ReadAsync(buffer, count, options);
}
public IAsyncOperationWithProgress<uint,uint> WriteAsync(IBuffer buffer)
{
var outputStream = this.GetOutputStreamAt(0);
return outputStream.WriteAsync(buffer);
}
}
The problem appears in the ImageStream.WriteAsync method.
This method will be called multiple times during stream writing (just like moving house: you make several trips if you can't move everything at once). But in this method you always write from 0, which means that when you write the second time, it overwrites the previous data.
Try this:
public IAsyncOperationWithProgress<uint, uint> WriteAsync(IBuffer buffer)
{
var outputStream = this.GetOutputStreamAt(this.Size);
return outputStream.WriteAsync(buffer);
}
This ensures that each call continues writing from where the last one ended, and you will get the correct result.
Thanks.
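As a side note, if a hand-rolled IRandomAccessStream is not strictly required, the built-in InMemoryRandomAccessStream from Windows.Storage.Streams sidesteps the whole issue; a minimal sketch using the same SaveAsync call as in the question:
// Requires Windows.Storage.Streams and System.Runtime.InteropServices.WindowsRuntime (for AsBuffer)
using (var stream = new InMemoryRandomAccessStream())
{
    await pictureBitmap.SaveAsync(stream, CanvasBitmapFileFormat.Jpeg, 0.8f);

    var bytes = new byte[stream.Size];
    stream.Seek(0);
    await stream.ReadAsync(bytes.AsBuffer(), (uint)stream.Size, InputStreamOptions.None);
    // bytes now holds the complete JPEG, header included
}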
I built out this whole app thinking that the garbage collector handled memory clean-up just fine, which was incredibly stupid and naive of me, but hey, it was my first time ever using Xamarin to build an app, and my first time ever building an app, so what's a guy to do? Every screen seems to leak memory, but the screens that leak the most are screens that have bitmaps. Generating a memory dump and analyzing it in MAT, I found the following:
So there are 4 potential culprits: 2 are bitmaps, 2 are byte arrays. This is a heap dump for the main menu of the app; if I go into my list view activity for listing out elements, I get 5 potential leaks from bitmaps. Here is the code for the activity:
AssetManager assets = Assets;
Window.AddFlags(WindowManagerFlags.DrawsSystemBarBackgrounds);
var topPanel = FindViewById<TextView>(Resource.Id.topPanel);
topPanel.Text = service.GetLanguageValue("use recommendations - top bar heading");
topPanel.Dispose();
var lowerPanel = FindViewById<TextView>(Resource.Id.recommendationsPanel);
lowerPanel.Text = service.GetLanguageValue("title upper - recommendations by variety");
Shared.ScaleTextToOneLine(lowerPanel, lowerPanel.Text, Shared.ScaleFloatToDensityPixels(Shared.GetViewportWidthInDp()), 1.0f);
lowerPanel.Dispose();
// Read html file and replace its contents with apple data
string html = "";
using (StreamReader sr = new StreamReader(Assets.Open("apple-variety-detail.html")))
{
html = sr.ReadToEnd();
}
html = ReplaceAppleDetailsHtml(html);
var webview = FindViewById<WebView>(Resource.Id.recommendationsMessage);
CleanWebView();
webview.LoadDataWithBaseURL("file:///android_asset/",
html,
"text/html", "UTF-8", null);
if (Shared.currentApple != null)
{
// Setup apple image
using (var imageView = FindViewById<ImageView>(Resource.Id.recommendationsImage))
{
var apple = this.apples.Where(a => a.Id == Shared.currentApple.AppleId).Select(a => a).First();
var imgName = apple.Identifier.First().ToString().ToUpper() + apple.Identifier.Substring(1);
var fullImageName = "SF_" + imgName;
using (var bitmap = Shared.decodeSampledBitmapFromResource(ApplicationContext.Resources,
Resources.GetIdentifier(fullImageName.ToLower(), "drawable", PackageName),
200, 200))
{
imageView.SetImageBitmap(bitmap);
}
}
// Setup apple name
FindViewById<TextView>(Resource.Id.appleNameTextView).Text = Shared.currentApple.Name;
}
else
{
FindViewById<TextView>(Resource.Id.appleNameTextView).Text = "Not Found!";
}
// Setup list menu for apples
AppleListView = FindViewById<ListView>(Resource.Id.ApplesListMenu);
// Scale details and list to fit on the same screen if the screen size permits
if (Shared.GetViewportWidthInDp() >= Shared.minPhoneLandscapeWidth)
{
var listViewParams = AppleListView.LayoutParameters;
// Scales list view to a set width
listViewParams.Width = Shared.ScaleFloatToDensityPixels(240);
listViewParams.Height = Shared.ScaleFloatToDensityPixels(Shared.GetViewportHeightInDp());
AppleListView.LayoutParameters = listViewParams;
}
else
{
// Here, we either need to hide the list view if an apple was selected,
// or set it to be 100% of the screen if it wasn't selected.
if(!Shared.appleSelected)
{
var listViewParams = AppleListView.LayoutParameters;
// Scales list view to a set width
listViewParams.Width = Shared.ScaleFloatToDensityPixels(Shared.GetViewportWidthInDp());
listViewParams.Height = Shared.ScaleFloatToDensityPixels(Shared.GetViewportHeightInDp());
AppleListView.LayoutParameters = listViewParams;
}
else
{
var listViewParams = AppleListView.LayoutParameters;
// Scales list view to a set width
listViewParams.Width = Shared.ScaleFloatToDensityPixels(0);
listViewParams.Height = Shared.ScaleFloatToDensityPixels(Shared.GetViewportHeightInDp());
AppleListView.LayoutParameters = listViewParams;
}
}
// Set listview adapter
if(AppleListView.Adapter == null)
{
AppleListView.Adapter = new Adapters.AppleListAdapter(this, (List<Apple>)apples, this);
}
AppleListView.FastScrollEnabled = true;
// Set the currently active view for the slide menu
var frag = (SlideMenuFragment)FragmentManager.FindFragmentById<SlideMenuFragment>(Resource.Id.SlideMenuFragment);
frag.SetSelectedLink(FindViewById<TextView>(Resource.Id.SlideMenuRecommendations));
// Replace fonts for entire view
Typeface tf = Typeface.CreateFromAsset(assets, "fonts/MuseoSansRounded-300.otf");
FontCrawler fc = new FontCrawler(tf);
fc.replaceFonts((ViewGroup)this.FindViewById(Android.Resource.Id.recommendationsRootLayout));
tf.Dispose();
}
The important thing to note is how this activity works: it loads an adapter, and when it displays, it shows a list of items. When an item is clicked, it reloads this same activity, computes the screen size, shrinks down the list to show only the webview off to the side, and displays details about the item, thus simulating 2 screens. The reason I did this is that when the screen size is larger, it needs to show all of this as one single view, so on larger screens it will actually show both the listview and the webview, but still reload the activity to load new data.
The adapter code is probably what is giving me a hard time, but I'm not sure. I've tried quite a few things, but nothing seems to help. Here's the adapter code:
public class AppleListAdapter : BaseAdapter<Apple>
{
List<Apple> items;
Activity context;
ApplicationService service = AgroFreshApp.Current.ApplicationService;
private Context appContext;
private Typeface tf;
static AppleRowViewHolder holder = null;
public AppleListAdapter(Activity context, List<Apple> items, Context appContext): base ()
{
this.context = context;
this.items = items;
this.appContext = appContext;
context.FindViewById<ListView>(Resource.Id.ApplesListMenu).ChoiceMode = ChoiceMode.Single;
tf = Typeface.CreateFromAsset(context.Assets, "fonts/MuseoSansRounded-300.otf");
}
public override long GetItemId(int position)
{
return position;
}
public override Apple this[int position]
{
get { return items[position]; }
}
public override int Count
{
get
{
return items.Count;
}
}
public override View GetView(int position, View convertView, ViewGroup parent)
{
var item = items[position];
var view = convertView;
var imgName = item.Identifier.First().ToString().ToUpper() + item.Identifier.Substring(1);
var fullImageName = "SF_" + imgName;
if (view == null)
{
view = context.LayoutInflater.Inflate(Resource.Layout.appleRowView, null);
}
if (view != null)
{
holder = view.Tag as AppleRowViewHolder;
}
if(holder == null)
{
holder = new AppleRowViewHolder();
view = context.LayoutInflater.Inflate(Resource.Layout.appleRowView, null);
holder.AppleImage = view.FindViewById<ImageView>(Resource.Id.iconImageView);
holder.AppleName = view.FindViewById<TextView>(Resource.Id.nameTextView);
view.Tag = holder;
}
using (var bitmap = Shared.decodeSampledBitmapFromResource(context.Resources,
context.Resources.GetIdentifier(fullImageName.ToLower(), "drawable", context.PackageName),
25, 25))
{
holder.AppleImage.SetImageBitmap(bitmap);
}
holder.AppleName.Text = AgroFreshApp.Current.AppleDetailManager.GetAll().Where(a => a.AppleId == item.Id).Select(a => a.Name).FirstOrDefault();
holder.AppleName.SetTypeface(tf, TypefaceStyle.Normal);
view.Click += (object sender, EventArgs e) =>
{
var apple = AgroFreshApp.Current.AppleManager.Get(item.Id);
Shared.currentApple = AgroFreshApp.Current.AppleDetailManager.GetAll().Where(a=>a.AppleId == item.Id && a.LanguageId == service.UserSettings.LanguageId).Select(a=>a).FirstOrDefault();
Shared.appleSelected = true;
Intent intent = new Intent(appContext, typeof(RecommendationsActivity));
intent.SetFlags(flags: ActivityFlags.NoHistory | ActivityFlags.NewTask);
appContext.StartActivity(intent);
};
return view;
}
}
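One thing I have since noticed in GetView above: view.Click is subscribed on every call, including recycled views, so handlers (and the items they capture) keep piling up on each convertView. A rough sketch of what I think the fix looks like, keeping the handler on the holder so it can be detached first (the ClickHandler field is hypothetical, it is not in my real view holder):
// In AppleRowViewHolder, add: public EventHandler ClickHandler;
if (holder.ClickHandler != null)
{
    view.Click -= holder.ClickHandler;
}
holder.ClickHandler = (object sender, EventArgs e) =>
{
    var apple = AgroFreshApp.Current.AppleManager.Get(item.Id);
    Shared.currentApple = AgroFreshApp.Current.AppleDetailManager.GetAll()
        .Where(a => a.AppleId == item.Id && a.LanguageId == service.UserSettings.LanguageId)
        .FirstOrDefault();
    Shared.appleSelected = true;
    Intent intent = new Intent(appContext, typeof(RecommendationsActivity));
    intent.SetFlags(ActivityFlags.NoHistory | ActivityFlags.NewTask);
    appContext.StartActivity(intent);
};
view.Click += holder.ClickHandler;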
So I'm using the view holder pattern here, and assigning click events to each list item as they get generated, with NoHistory and NewTask as the intent flags so that the pages refresh properly. To clean up the bitmaps, I have been using these two methods:
This cleans the large image on the details webview:
public void CleanBitmap()
{
// Clean recommendations bitmap
ImageView imageView = (ImageView)FindViewById(Resource.Id.recommendationsImage);
Drawable drawable = imageView.Drawable;
if (drawable is BitmapDrawable)
{
BitmapDrawable bitmapDrawable = (BitmapDrawable)drawable;
if (bitmapDrawable.Bitmap != null)
{
Bitmap bitmap = bitmapDrawable.Bitmap;
if (!bitmap.IsRecycled)
{
imageView.SetImageBitmap(null);
bitmap.Recycle();
bitmap = null;
}
}
}
Java.Lang.JavaSystem.Gc();
}
And this cleans the bitmaps stored in each listview item:
public void CleanListViewBitmaps()
{
var parent = FindViewById<ListView>(Resource.Id.ApplesListMenu);
// Clean listview bitmaps
for (int i = 0; i < parent.ChildCount; i++)
{
var tempView = parent.GetChildAt(i);
// If the tag is null, this no longer holds a reference to the view, so
// just leave it.
if(tempView.Tag != null)
{
AppleRowViewHolder tempHolder = (AppleRowViewHolder)tempView.Tag;
var imageView = tempHolder.AppleImage;
var drawable = imageView.Drawable;
if (drawable is BitmapDrawable)
{
BitmapDrawable bitmapDrawable = (BitmapDrawable)drawable;
if (bitmapDrawable.Bitmap != null)
{
Bitmap bitmap = bitmapDrawable.Bitmap;
if (!bitmap.IsRecycled)
{
imageView.SetImageBitmap(null);
bitmap.Recycle();
bitmap = null;
}
}
}
}
}
Java.Lang.JavaSystem.Gc();
}
They then get called in the activities ondestroy method like so:
protected override void OnDestroy()
{
base.OnDestroy();
CleanBitmap();
CleanListViewBitmaps();
Shared.appleSelected = false;
}
I'm also using a shared class with static variables to essentially track view state, like whether something was selected or not, but it only stores primitives; it doesn't store any view objects or anything like that, so I don't think that is the problem. Like I said, it looks like bitmaps aren't getting cleaned up correctly, and it seems to happen on every view, but this one in particular is bad.
I also load 2 fragments on each view: one is a slide menu fragment in a frame layout, and the other is a navbar fragment that just holds 2 bitmaps for a logo and menu handle, so those could be culprits too, I suppose. Here's the navbar fragment:
public override View OnCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState)
{
// Use this to return your custom view for this Fragment
// return inflater.Inflate(Resource.Layout.YourFragment, container, false);
var view = inflater.Inflate(Resource.Layout.navbar, container, false);
var navLogo = view.FindViewById(Resource.Id.navbarLogo);
var menuHandle = view.FindViewById(Resource.Id.menuHandle);
var navSpacer = view.FindViewById(Resource.Id.navSpacer);
((ImageButton)(menuHandle)).SetMaxWidth(Shared.GenerateProportionalWidth(.25f, 50));
((ImageButton)(menuHandle)).SetMaxHeight(Shared.GenerateProportionalHeight(.25f, 50));
((ImageButton)(menuHandle)).Click += (object sender, EventArgs e) =>
{
var slideMenu = FragmentManager.FindFragmentById(Resource.Id.SlideMenuFragment);
if (slideMenu.IsHidden)
{
FragmentManager.BeginTransaction().Show(slideMenu).Commit();
}
else if (!slideMenu.IsHidden)
{
FragmentManager.BeginTransaction().Hide(slideMenu).Commit();
}
};
var navLogoParams = navLogo.LayoutParameters;
// Account for the padding offset of the handle to center logo truly in the center of the screen
navLogoParams.Width = global::Android.Content.Res.Resources.System.DisplayMetrics.WidthPixels - (((ImageButton)(menuHandle)).MaxWidth * 2);
navLogoParams.Height = (Shared.GenerateProportionalHeight(.25f, 30));
navLogo.LayoutParameters = navLogoParams;
// Spacer puts the logo in the middle of the screen, by making it's size the same as the handle on the opposite side to force-center the logo
((Button)(navSpacer)).SetMaxWidth(Shared.GenerateProportionalWidth(.25f, 50));
((Button)(navSpacer)).SetMaxHeight(Shared.GenerateProportionalHeight(.25f, 50));
return view;
}
Does anyone see any obvious or stupid mistake that I'm making? I feel like it has to just be sheer inexperience that's causing me to miss something really obvious, or I'm doing something completely wrong, either way.
EDIT #1:
One of the bitmaps leaking was the menu handle button in the navigation fragment, so that drops the leak down from 300kb to 200kb, but I still need to figure out how to clean it properly.
EDIT #2:
Here is my code that scales bitmaps down:
public static Bitmap decodeSampledBitmapFromResource(Resources res, int resId,
int reqWidth, int reqHeight)
{
// First decode with inJustDecodeBounds=true to check dimensions
BitmapFactory.Options options = new BitmapFactory.Options();
options.InJustDecodeBounds = true;
BitmapFactory.DecodeResource(res, resId, options);
// Calculate inSampleSize
options.InSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);
// Decode bitmap with inSampleSize set
options.InJustDecodeBounds = false;
return BitmapFactory.DecodeResource(res, resId, options);
}
public static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight)
{
// Raw height and width of image
int height = options.OutHeight;
int width = options.OutWidth;
int inSampleSize = 1;
if (height > reqHeight || width > reqWidth)
{
int halfHeight = height / 2;
int halfWidth = width / 2;
// Calculate the largest inSampleSize value that is a power of 2 and keeps both
// height and width larger than the requested height and width.
while ((halfHeight / inSampleSize) >= reqHeight
&& (halfWidth / inSampleSize) >= reqWidth)
{
inSampleSize *= 2;
}
}
return inSampleSize;
}
For anyone wondering, I've figured out the problem. Xamarin is a C# wrapper around native Java, so at runtime there is the native Java runtime and the Mono runtime as well. For any object like a bitmap that you want to clean up, you need to clean up the native Java object, but you also need to clean up the C# handle to the native object, because otherwise the garbage collector goes to see if it should clean your resource, sees a handle associated with the resource, and moves on. My solution was to call the C# Dispose after I cleaned up the native Java object, and then call both the C# and Java garbage collectors. I'm not sure if calling both garbage collectors is explicitly needed, but I chose to do it anyway. I seriously hope this helps someone out; I do not envy people who have to hunt down these problems.
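A rough sketch of the cleanup that worked for me, adapted from the CleanBitmap method above (the ordering, Recycle then Dispose then both collectors, is the important part):
if (!bitmap.IsRecycled)
{
    imageView.SetImageBitmap(null);
    bitmap.Recycle();          // free the native Java pixel data
    bitmap.Dispose();          // drop the C# handle to the Java object
    bitmap = null;
}
Java.Lang.JavaSystem.Gc();     // Java-side garbage collector
GC.Collect();                  // Mono/C#-side garbage collector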
Sometimes bitmaps are not garbage collected correctly and generate the OutOfMemory exception.
My suggestion, if you're working with bitmaps, is to call
System.gc();
to recycle bitmaps from memory correctly
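In Mono for Android, the equivalent would presumably be the Java GC binding, optionally followed by the Mono collector:
Java.Lang.JavaSystem.Gc(); // Java-side collection
GC.Collect();              // Mono-side collection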
I have a function called DisplayAndSaveImageFromByteArray.
By its name you probably understand what I am trying to do. The values in the byte array are pixel data, like 255, 220, 130, 0, etc.
The size of the byte array is the width times the height of the image times 4, because it works with strides.
public void DisplayAndSaveImageFromByteArray(byte[] byteArray)
{
try{
byte[] data = new byte[width * height * 4];
int o = 0;
for (int io = 0; io < width * height; io++){
byte value = byteArray[io];
data[o++] = value;
data[o++] = value;
data[o++] = value;
data[o++] = 0;
}
unsafe
{
fixed (byte* ptr = data)
{
using (image = new Bitmap((int)width, (int)height, (int)width * 4,System.Drawing.Imaging.PixelFormat.Format32bppRgb, new IntPtr(ptr)))
{
image.Save(@"c:\\testmap\" + nextpicture + ".jpg", ImageFormat.Jpeg);
if (nextpicture >= 10)
{
pbCameraPreview.BeginInvoke(new InvokeDelegate(InvokeMethod));
}
nextpicture++;
}
}
}
}
catch (Exception ex){
MessageBox.Show(ex.ToString());
}
}
When I run this code it will work, but only if all values are the same, for example white (255,255,255) or black (0,0,0).
It is able to deviate about 5 up and down in the RGB(A) values before it stops working.
But as soon as the color of the image changes it will stop working, without giving me an exception or anything.
The only error/exception I get is if I leave it running for a minute: Visual Studio recognizes that the code being executed is not doing anything and gives me a ContextSwitchDeadlock warning.
What did I do wrong for it to crash?
And what is the solution for it?
For some reason it won't let me post the using directives and namespace name...
(Updated) Complete code:
public Form1()
{
InitializeComponent();
pbCameraPreview.Image = defImg;
}
#region Global
int ii;
object __p1;
EventArgs __p2;
string path;
Image defImg = Image.FromFile(@"c:\\testimg\def.jpg");
UInt32 width;
UInt32 height;
int nextpicture = 0;
FGNodeInfoContainer InfoContainer = new FGNodeInfoContainer();
FGNodeInfo NodeInfo = new FGNodeInfo();
FireWrap_CtrlCenter CtrlCenter;
enFireWrapResult Result;
UInt32 XSize = new UInt32();
UInt32 YSize = new UInt32();
UInt32 NodeCnt;
public delegate void InvokeDelegate();
Bitmap image;
CameraCode Cam = new CameraCode();
FGFrame Frame = new FGFrame();
FGUIntHL Guid = new FGUIntHL();
#endregion
public void CheckDirectory()
{
path = @"c:\\testmap\" + ii + "\\";
if (Directory.Exists(@"c:\\testmap\") == false)
{
Directory.CreateDirectory(@"c:\\testmap\");
}
if (Directory.Exists(path))
{
if (File.Exists(path + "0.Jpeg"))
{
ii++;
CheckDirectory();
}
}
else
{
Directory.CreateDirectory(path);
}
}
// Fetch the images
/// <param name="__p1"></param>
/// <param name="__p2"></param>
public void btStart_Click(object sender, EventArgs e)
{
Debug.WriteLine("btStart_Click is clicked");
// Init module
CtrlCenter = FireWrap_CtrlCenter.GetInstance();
Result = CtrlCenter.FGInitModule();
// Register frame start event
CtrlCenter.OnFrameReady += new FireWrap_CtrlCenter.FireWrapEvent(OnFrameReady);
// Get list of connected nodes
if (Result == enFireWrapResult.E_NOERROR)
{
Result = InfoContainer.FGGetNodeList();
NodeCnt = InfoContainer.Size();
// Print Nodecnt
Console.WriteLine(NodeCnt.ToString() + " camera found");
// Connect with first node
InfoContainer.GetAt(NodeInfo, 0);
Result = Cam.Connect(NodeInfo.Guid);
if (Result == enFireWrapResult.E_NOERROR)
{
Cam.m_Guid = NodeInfo.Guid;
}
// Set Format7 Mode0 Y8
if (Result == enFireWrapResult.E_NOERROR)
{
Result = Cam.SetParameter(enFGParameter.E_IMAGEFORMAT,
(uint)(((uint)enFGResolution.E_RES_SCALABLE << 16) |
((uint)enColorMode.E_CCOLORMODE_Y8 << 8) |
0));
}
if (Result != enFireWrapResult.E_NOERROR)
{
Result = Cam.SetParameter(enFGParameter.E_IMAGEFORMAT,
(uint)(((uint)enFGResolution.E_RES_SCALABLE << 16) |
((uint)enColorMode.E_CCOLORMODE_Y8 << 8) |
1));
}
// Start DMA logic
if (Result == enFireWrapResult.E_NOERROR)
Result = Cam.OpenCapture();
// Print device settings
Result = Cam.GetParameter(enFGParameter.E_XSIZE, ref XSize);
Result = Cam.GetParameter(enFGParameter.E_YSIZE, ref YSize);
Debug.WriteLine(Cam.DeviceAll + " [" + Cam.m_Guid.Low.ToString() + "] " + XSize + "x" + YSize);
width = XSize;
height = YSize;
// Start camera
if (Result == enFireWrapResult.E_NOERROR)
{
Result = Cam.StartDevice();
}
}
}
public void btStop_Click(object sender, EventArgs e)
{
// Stop the device
Cam.StopDevice();
// Close capture
Cam.CloseCapture();
// Disconnect before ExitModule
Cam.Disconnect();
// Exit module
CtrlCenter.FGExitModule();
}
/// <param name="__p1"></param>
/// <param name="__p2"></param>
public void OnFrameReady(object __p1, EventArgs __p2)
{
Debug.WriteLine("OnFrameReady is called");
FGEventArgs args = (FGEventArgs)__p2;
Guid.High = args.High;
Guid.Low = args.Low;
if (Guid.Low == Cam.m_Guid.Low)
{
Result = Cam.GetFrame(Frame, 0);
// Process frame, skip FrameStart notification
if (Result == enFireWrapResult.E_NOERROR & Frame.Length > 0)
{
byte[] data = new byte[Frame.Length];
// Access to frame data
if (Frame.CloneData(data))
{
DisplayAndSaveImageFromByteArray(data);
// Here you can start your image processing logic on data
string debug = String.Format("[{6}] Frame #{0} length:{1}byte [ {2} {3} {4} {5} ... ]",
Frame.Id, Frame.Length, data[0], data[1], data[2], data[3], Cam.m_Guid.Low);
Debug.WriteLine(debug);
}
// Return frame to module as fast as possible; after this the Frame is not valid
Result = Cam.PutFrame(Frame);
}
}
}
public void DisplayAndSaveImageFromByteArray(byte[] byteArray)
{
try{
byte[] data = new byte[width * height * 4];
int o = 0;
for (int io = 0; io < width * height; io++){
byte value = byteArray[io];
data[o++] = value;
data[o++] = value;
data[o++] = value;
data[o++] = 0;
}
unsafe
{
fixed (byte* ptr = data)
{
using (image = new Bitmap((int)width, (int)height, (int)width * 4,System.Drawing.Imaging.PixelFormat.Format32bppRgb, new IntPtr(ptr)))
{
image.Save(@"c:\\testmap\" + nextpicture + ".jpg", ImageFormat.Jpeg);
if (nextpicture >= 10)
{
pbCameraPreview.BeginInvoke(new InvokeDelegate(InvokeMethod));
}
nextpicture++;
}
}
}
}
catch (Exception ex){
MessageBox.Show(ex.ToString());
}
}
public void InvokeMethod()
{
pbCameraPreview.Image = Image.FromFile(@"c:\\testmap\" + (nextpicture -10) + ".jpg");
}
}
public class CameraCode : FireWrap_Camera
{
public FGUIntHL m_Guid;
}}
Threads running:
I recorded it for extra information:
https://www.youtube.com/watch?v=i3TxWRyZaIU
I'm not 100% sure that I have understood your problem, since the format of the input array, and how you have to format it before parsing it into the Bitmap variable, are not very clear... But here we go; I hope these tips can help you. If they don't, please try to provide some extra details on what you are trying to do.
First of all, if I have understood correctly, you should increase "io" and update the variable "value" each time you assign it to data[o++] in the main loop; otherwise you are assigning the same value to the R, G and B channels, which will always result in a shade of gray.
Secondly, I see a couple of things in your code that are not very .NET-like... .NET already provides ways to load an image from a byte array, using memory streams and the like. Take a look at How to create bitmap from byte array?
And be sure to indicate the proper format of your byte array image when instantiating the Bitmap or Image --> https://msdn.microsoft.com/en-us/library/system.drawing.imaging.pixelformat(v=vs.110).aspx
Regards.
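For reference, a minimal sketch of copying the expanded pixel bytes into a Bitmap with LockBits instead of pinning the array yourself (it reuses the width, height, data and nextpicture names from the question, and assumes the stride really is width * 4):
var bmp = new Bitmap((int)width, (int)height, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
var bmpData = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.WriteOnly, bmp.PixelFormat);
// Copy the expanded BGRA data into the bitmap's own buffer
System.Runtime.InteropServices.Marshal.Copy(data, 0, bmpData.Scan0, data.Length);
bmp.UnlockBits(bmpData);
bmp.Save(@"c:\\testmap\" + nextpicture + ".jpg", ImageFormat.Jpeg);
bmp.Dispose();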
If you have trouble posting your code on Stack Overflow, make sure each line is indented by at least 4 spaces (mark the code in Visual Studio, press Tab, then copy it) and separated from any other paragraphs by at least one empty line. Alternatively, add a line of three backticks (`) at the beginning and end of the code.
When looking at your code, it seems to me that you are using a third-party control inside a Windows Form and trying to bind its event to one of your Windows Forms event handlers. In that case you have to be aware that Windows Forms expects all events to be handled single-threaded (on the event dispatch thread), so you should check (Debug.WriteLine) in your OnFrameReady method whether InvokeRequired is true, and if so you have to take a few precautions:
Never access any of the Form's internal members (like pbCameraPreview) without wrapping the call into Invoke or BeginInvoke. Keep in mind that every Invoke call will effectively block until the single event-dispatch thread is available, so it will cost you a lot of performance to do invoke synchronously.
When accessing your own members (like width, height or nextpicture), make sure you use appropriate locking to avoid situations where one thread/callback changes the value in a situation where you don't expect it. In particular, since you have multiple cams but only a single width/height variable, if the resolutions differ, one camera callback could change the width while the other camera callback has just allocated the byte array but before passing the width to the Bitmap constructor, most likely resulting in a hang, crash, or memory access violation. Avoid this by passing width and heigth as variables into your method instead of using the global ones.
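For the first precaution, a minimal sketch using the InvokeDelegate and InvokeMethod already defined in your code:
// Marshal the preview update onto the UI thread; BeginInvoke is used so the
// camera callback is not blocked while waiting for the event dispatch thread.
if (pbCameraPreview.InvokeRequired)
{
    pbCameraPreview.BeginInvoke(new InvokeDelegate(InvokeMethod));
}
else
{
    InvokeMethod();
}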
Your form's event handler (btStart_Click) contains a busy-wait loop. This has multiple problems:
As stated before, every Form event handler runs in the same thread (and every Invoke as well). So as long as your loop is busy-waiting, no other events can be handled and the UI will be completely frozen. If code uses Invoke, that code will eventually have to wait and be blocked too.
Busy-wait loops without any sleep in them will cause your processor to run at full CPU speed and eat 100% of that core, causing battery drain on notebooks and high power consumption and probably loud fans on other machines. Just don't do that.
In your concrete example, just remove the loop and move everything after the loop into the stop button instead. To avoid the problems with Windows Forms and invoking, I'd suggest putting all the image handling logic into a separate class that does its own locking (each instance of the class handling one camera only), and just calling into it to start the process. You might even prefer to write your app as a console application first to avoid any event-dispatching issues, and later convert it to a Windows Forms application once it is working as you desire.