C# Mobile - Memory warning (clear memory)

I'm currently working on an application that runs on Windows Mobile 6.1 (not WP). The application synchronizes data from a remote server multiple times a day, but somehow that data seems to be "remembered" after each run: Task Manager shows about 3 MB in use at a normal start of the application, and this grows by about 2 MB every time I run the synchronization. After several runs I get a memory-usage warning and have to reset the device or restart the program.
What I'm looking for is some way to clear the data after synchronization, a kind of garbage collection. In (desktop) C# I've found GC.Collect(), but I can't get this working in C# on mobile.
Below is my code, which works correctly, except that at a certain point I get the message "Geheugentekort" ("memory shortage" in Dutch).
Probably I have to empty variables like doc, root, and the XmlNodeList objects after the for loop, but the question is how...
My device: Pidion BIP-5000
OS: Windows Mobile 6.1
XmlDocument doc = new XmlDocument();
doc.Load(xmlUrl);
XmlElement root = doc.DocumentElement;
try
{
    totaal = Int32.Parse(doc.GetElementsByTagName("Totaal")[0].InnerText);
    // Create lists with the values
    XmlNodeList namen = doc.GetElementsByTagName("naam");
    XmlNodeList ptypen = doc.GetElementsByTagName("ptype");
    XmlNodeList ids = doc.GetElementsByTagName("id");
    // Iterate through the total
    for (int i = 0; i < totaal; i++)
    {
        // Create variables from the lists
        int id = Int32.Parse(ids[i].InnerText);
        int ptype = Int32.Parse(ptypen[i].InnerText);
        string naam = namen[i].InnerText;
        // Check if the ID exists
        int tot = this.tbl_klantTableAdapter.GetData(id).Count;
        if (tot == 0)
        {
            // New item, add
            this.tbl_klantTableAdapter.Insert(naam, ptype, id);
        }
        else
        {
            // Existing item, update
            this.tbl_klantTableAdapter.Update(naam, ptype, id);
        }
    }
}
catch
{
    // Rest of code
}

Disposing your node lists after the loop may help:
System.Xml.XmlNodeList tempNodelist = ...; // your XmlNodeList here
IDisposable disposeMe = tempNodelist as IDisposable;
if (disposeMe != null)
{
    disposeMe.Dispose();
}

On the Compact Framework, XmlNodeList implements IDisposable, so you can call namen.Dispose() (and likewise for the other XmlNodeList objects) to force the objects to be discarded and cleaned up.

Yes, you definitely should keep the XML objects local and dispose of them after use; the XML objects seem to occupy large blocks of memory.
You should use nameX.Dispose() and nameX = null to free up the memory used by these temporary XML objects (see the sketch below).
You can use GC.Collect() to force a memory collection: http://blogs.msdn.com/b/stevenpr/archive/2004/07/26/197254.aspx.
You can also use the remote .NET performance viewer to get insight into memory usage: http://blogs.msdn.com/b/stevenpr/archive/2006/04/17/577636.aspx
If your app is consuming a lot of memory before calling into the sync task, you may consider creating a new application with a separate process for the sync work. You can also free up memory in your process by moving functions into a library: WM 6.1 and higher have a separate memory slot for Compact Framework libraries, so the main process memory slot is not reduced: http://blogs.msdn.com/b/robtiffany/archive/2009/04/09/memmaker-for-the-net-compact-framework.aspx
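A minimal sketch of that cleanup, assuming the variable names from the question (doc, root, namen, ptypen, ids) and assuming the Compact Framework honors the IDisposable cast shown in the other answer; this is untested on a device:
// Sketch only: release the XML objects after the sync loop.
private static void DisposeNodeList(XmlNodeList list)
{
    IDisposable disposable = list as IDisposable;
    if (disposable != null)
        disposable.Dispose();
}

// ...right after the synchronization loop:
DisposeNodeList(namen);
DisposeNodeList(ptypen);
DisposeNodeList(ids);
namen = null;
ptypen = null;
ids = null;
root = null;
doc = null;
GC.Collect(); // GC.Collect is available on the Compact Framework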
If you need more help, you should provide more details/code.

Related

RDLC Memory Leak

In my application (.NET Framework 4.5) I'm rendering some RDLC reports (50-60) in order to export them to a single PDF.
Unfortunately there seems to be a big memory leak; basically, every LocalReport never gets disposed.
This is my code:
public void ProcessReport(ReportDataSource[] reportDS, string reportPath)
{
    const string format = "PDF";
    string deviceInfo = null;
    string encoding = String.Empty;
    string mimeType = String.Empty;
    string extension = String.Empty;
    Warning[] warnings = null;
    string[] streamIDs = null;
    Byte[] pdfArray = null;

    using (var report = new LocalReport())
    {
        report.EnableExternalImages = true;
        report.ReportEmbeddedResource = reportPath;
        report.Refresh();

        foreach (var rds in reportDS)
        {
            report.DataSources.Add(rds);
        }
        report.Refresh();

        try
        {
            pdfArray = report.Render(format, deviceInfo, out mimeType, out encoding,
                out extension, out streamIDs, out warnings);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.InnerException.Message);
            throw;
        }

        report.ReleaseSandboxAppDomain();
        report.Dispose();
        //Add pdfArray to MemoryStream and then to PDF - Doesn't leak
    }
}
I found the memory leak just by looking at the Visual Studio memory panel: every time report.Render gets called it adds 20-30 MB, and the memory never goes down until I close the application. I'm sure the MemoryStream is not the issue, because even with it commented out I still get 200-250 MB in memory that never gets released. This is bad, because after running this application 3-4 times it reaches more than 1 GB, until it doesn't even run anymore. I also tried to call the garbage collector manually, but that didn't work. The application is 32-bit.
What can I do to fix this?
I have a real solution and can explain why!
It turns out that LocalReport uses .NET Remoting to dynamically create a sub-AppDomain and run the report, in order to avoid a leak internally somewhere. We then noticed that the report does eventually release all the memory, after 10 to 20 minutes. For people generating a lot of PDFs, that isn't going to work. The key here is that it uses .NET Remoting, and one of the central parts of Remoting is something called "leasing": the runtime keeps the marshaled object around for a while, since Remoting is usually expensive to set up and the object will probably be used more than once. LocalReport's RDLC handling is abusing this.
By default, the lease time is... 10 minutes! Also, each call into the object adds another 2 minutes to the wait time, so the delay can land anywhere between 10 and 20 minutes depending on how the calls line up. Luckily, you can change how long this lease lasts. Unluckily, you can only set it once per AppDomain, so if you need Remoting for anything other than PDF generation, you will probably need to run the reports in a separate service so you can change the defaults. To do this, all you need to do is run these four lines of code at startup:
// Requires: using System.Runtime.Remoting.Lifetime;
LifetimeServices.LeaseTime = TimeSpan.FromSeconds(5);
LifetimeServices.LeaseManagerPollTime = TimeSpan.FromSeconds(5);
LifetimeServices.RenewOnCallTime = TimeSpan.FromSeconds(1);
LifetimeServices.SponsorshipTimeout = TimeSpan.FromSeconds(5);
You'll see the memory use start to rise, and then within a few seconds you should see it start coming back down. It took me days with a memory profiler to track this down and realize what was happening.
You can't wrap ReportViewer in a using statement (Dispose crashes), but you should be able to if you use LocalReport directly. After it is disposed, you can call GC.Collect() if you want to be doubly sure you are doing everything you can to free up that memory.
Hope this helps!
Edit
Apparently you should call GC.Collect(0) after generating a PDF report, or else memory use can still climb for some reason.
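Putting the pieces together, here is a minimal sketch of how the lease settings and the disposal advice could combine. The ReportRunner class and RenderPdf method are illustrative names, not from the question; Render(string) is the single-argument overload of LocalReport.Render:
using System;
using System.Runtime.Remoting.Lifetime;
using Microsoft.Reporting.WebForms; // or Microsoft.Reporting.WinForms, depending on the project

static class ReportRunner
{
    static ReportRunner()
    {
        // Shorten the Remoting lease once per AppDomain, before the first render,
        // so LocalReport's sandbox AppDomain is reclaimed in seconds, not minutes.
        LifetimeServices.LeaseTime = TimeSpan.FromSeconds(5);
        LifetimeServices.LeaseManagerPollTime = TimeSpan.FromSeconds(5);
        LifetimeServices.RenewOnCallTime = TimeSpan.FromSeconds(1);
        LifetimeServices.SponsorshipTimeout = TimeSpan.FromSeconds(5);
    }

    public static byte[] RenderPdf(string reportPath, ReportDataSource[] sources)
    {
        using (var report = new LocalReport())
        {
            report.EnableExternalImages = true;
            report.ReportEmbeddedResource = reportPath;
            foreach (var rds in sources)
                report.DataSources.Add(rds);

            byte[] pdf = report.Render("PDF"); // single-argument overload
            report.ReleaseSandboxAppDomain();
            GC.Collect(0); // per the edit above
            return pdf;
        }
    }
}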

Windows named pipe in Node.js (shared memory preferred)

I am using a named pipe to share some data between two processes on Windows. One is a Node.js process and the other is a C# process. Here is a sample of the code I use in my Node.js process:
var net = require('net');

var PIPE_NAME = "mypipe";
var PIPE_PATH = "\\\\.\\pipe\\" + PIPE_NAME;
var L = console.log;

var server = net.createServer(function(stream) {
    L('Server: on connection');
    stream.on('data', function(c) {
        L('Server: on data:', c.toString());
    });
    stream.on('end', function() {
        L('Server: on end');
        server.close();
    });
    stream.write('Take it easy!');
});

server.on('close', function() {
    L('Server: on close');
});

server.listen(PIPE_PATH, function() {
    L('Server: on listening');
});
I use a NamedPipeClientStream in C# to read the data. I do this in a loop on both sides, so that my Node.js process is a producer and the C# process is a consumer.
This works fine.
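For reference, the C# consumer side is not shown in the question; a minimal client matching the server above might look like the following sketch. Only the pipe name "mypipe" comes from the question, the rest is assumed:
using System;
using System.IO.Pipes;
using System.Text;

class PipeConsumer
{
    static void Main()
    {
        // Connect to the pipe created by the Node.js server above.
        using (var client = new NamedPipeClientStream(".", "mypipe", PipeDirection.In))
        {
            client.Connect(); // blocks until the server accepts the connection
            var buffer = new byte[4096];
            int n;
            while ((n = client.Read(buffer, 0, buffer.Length)) > 0)
            {
                Console.WriteLine("Client: got data: " + Encoding.UTF8.GetString(buffer, 0, n));
            }
        }
    }
}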
But sometimes the C# loop hangs, and at that point I want my Node.js process to overwrite the old data with the new data. I was wondering if I can specify some maximum size for the pipe (the one I create in Node.js) or a timeout for the data, but I couldn't find such options in the standard documentation.
If it cannot be solved this way, there is a shared-memory route to solve the problem, but I couldn't find any stable shared-memory library for Node.js that works nicely on Windows (and I don't have much time to write one right now). I need some pointers to move in the right direction.
Any advice is appreciated. Thanks.
EDIT: I would really prefer to implement the above using shared memory, since I need to share a large amount of data at a fast rate and I need to tune for performance. Any pointers on how to implement it?
I figured out a way to use the drain event of the Node.js writable stream to meet my requirement.

Possible memory leak while using AutomationElement Class

I am writing a program which logs my browsing history to a text file. It grabs the URL from the Chrome window and writes it to the text file. It gets the Chrome window handle as a parameter and returns the URL in an out parameter. The code looks like this:
static AutomationElement elm; // = AutomationElement.FromHandle(handle);
static AutomationElement elmUrlBar;

public static void GetChromeUrl(IntPtr handle, out string url)
{
    string namedProperty = "Address and search bar";
    url = null;
    elm = AutomationElement.FromHandle(handle);
    elmUrlBar = elm.FindFirst(TreeScope.Descendants,
        new PropertyCondition(AutomationElement.NameProperty, namedProperty));
    if (elmUrlBar != null)
    {
        AutomationPattern[] patterns = elmUrlBar.GetSupportedPatterns();
        if (patterns.Length > 0)
        {
            ValuePattern val = (ValuePattern)elmUrlBar.GetCurrentPattern(patterns[0]);
            url = val.Current.Value;
        }
    }
}
I call this method from a timer callback every, say, 5 seconds, and it returns the correct URL from the Chrome browser window. So it does its work well, although it seems that it doesn't free the memory occupied by the AutomationElement objects, and the RAM used by the application grows constantly. I profiled it with dotMemory 4.0 and ANTS Memory Profiler 8, and both show that AutomationElement objects are created but never deleted by the garbage collector. Does anybody know how to resolve this issue?
The company I currently work at ran into this problem as well.
This article has the answer to the problem in it: UIAutomation Memory Issue.
You basically need a way to call the GC from the application you are automating, because automation elements get added to the large object heap and take around 3 minutes to be disposed of if no pointers to them exist at that point.
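There is no clean client-side fix, but at a minimum the logging process can drop its own references between polls so the local proxies do not keep the elements alive. This sketch only addresses the client side; it does not force a collection inside the automated application, which is the real issue described above:
// Sketch: call after each timer tick (field names from the question).
private static void ReleaseAutomationRefs()
{
    elm = null;
    elmUrlBar = null;
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect(); // second pass for objects freed by finalizers
}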

Digital Persona verification process takes a lot of time with a huge database

I'm developing software that uses the Digital Persona U.are.U 4000b fingerprint reader.
It works OK, but I'm getting performance problems during fingerprint verification.
My database has around 3,000 fingerprints registered, and I need to loop over all of them during the verification process.
Every successful fingerprint reading takes around 7 seconds to match the respective record in my database (depending on its index).
That's not an acceptable scenario for me, because I need to register (and show their data, photo, and so on in real time) at least 400 students in an interval of 20 minutes.
The problem really is the huge fingerprint database, because when I tested with a smaller one it worked fine.
I'm using .NET with C# and a free SDK for the fingerprints.
The line of code causing the trouble is this one, which is executed in a foreach (once for each registered fingerprint in the database):
verificator.Verify(features, template, ref result);
verificator is a DPFP.Verification.Verification object which handles the verification process;
features is a DPFP.FeatureSet object which contains the data of the current fingerprint;
template is a DPFP.Template object which represents each of the registered fingerprints;
result is a DPFP.Verification.Verification.Result object which contains the result of each fingerprint validation.
Here is the whole process method:
protected void process(DPFP.Sample sample)
{
    DPFP.FeatureSet features = ExtractFeatures(sample, DPFP.Processing.DataPurpose.Verification);
    bool verified = false;
    if (features != null)
    {
        DPFP.Verification.Verification.Result result = new DPFP.Verification.Verification.Result();
        // "allTemplates" is a List<> of objects that contains all the templates previously loaded from the DB.
        // There is no DB access in these lines of code.
        foreach (var at in allTemplates)
        {
            verificator.Verify(features, at.template, ref result);
            if (result.Verified)
            {
                register(at.idStudent);
                verified = true;
                break;
            }
        }
    }
    if (!verified)
        error("Invalid student.");
}
Am I doing it correctly?
Is there another way of doing this work?
I solved my problem by purchasing (I "won" it, because I had already bought a reader) the new version of the SDK, which already implements an identification (1:N) function.
You can get more information and download (purchase) the SDK from their website.
Try out SourceAFIS. It's open source, and if you cache the fingerprints in memory it performs the sort of 1:N identification you're talking about at more than 10k fingerprints per second. The source is also 100% C#.
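A minimal sketch of what that 1:N loop could look like with the SourceAFIS .NET API (the FingerprintImage, FingerprintTemplate, and FingerprintMatcher class names come from its documentation; the threshold value and the caching structure are assumptions):
using System.Collections.Generic;
using SourceAFIS; // NuGet package: SourceAFIS

class Identifier
{
    // Candidate templates cached in memory, as suggested above.
    readonly List<FingerprintTemplate> candidates = new List<FingerprintTemplate>();

    public FingerprintTemplate Identify(byte[] probeImageBytes)
    {
        var probe = new FingerprintTemplate(new FingerprintImage(probeImageBytes));
        var matcher = new FingerprintMatcher(probe);

        FingerprintTemplate best = null;
        double bestScore = 0;
        foreach (var candidate in candidates)
        {
            double score = matcher.Match(candidate);
            if (score > bestScore)
            {
                bestScore = score;
                best = candidate;
            }
        }
        // 40 is the threshold commonly suggested in the SourceAFIS docs;
        // tune it for your own false-accept requirements.
        return bestScore >= 40 ? best : null;
    }
}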
It is better to convert the template to a string:
byte[] a = new byte[1632];
template.Serialize(ref a);
string basestring = Convert.ToBase64String(a);
and then turn the string back into a template:
// "trace" here is the stored Base64 string; convert it back to a byte array
byte[] b = Convert.FromBase64String(trace);
// create a Template variable in which we store the template
DPFP.Template template = new DPFP.Template();
// deserialize what we want to compare
template.DeSerialize(b);

Recursive directory traversal/tree consumes extreme amounts of memory

I have written a recursive directory traversal method in C# (hosted from an ASP.NET page). The code works as I intended (I enumerate a list of shares on a target machine, then recurse through the shares and add each file/directory to a TreeView). Unfortunately this consumes an extreme amount of memory and takes a very long time to run: opening the .aspx page causes the WebDev.WebServer RAM usage to spike to 800 megabytes, and the Chrome instance viewing the page consumes a whopping 1.5 GB of RAM (running the test code against SMB shares hosted on my local workstation)! I can't even view the page source without Chrome hanging.
foreach (TreeNode n in FileSelectList.Nodes)
{
    Dir_Node_Recurse(n, hostName);
    //break;
}
Uncommenting the //break; statement results in only the first directory share being processed, which consumes far less memory. FileSelectList is an asp:TreeView.
public static void Dir_Node_Recurse(TreeNode node, string hostName)
{
    DirectoryInfo dir = new DirectoryInfo(String.Format(@"\\{0}\{1}",
        hostName,
        node.ValuePath.ToString()
        ));
    TreeNode tNode;
    foreach (var i in dir.EnumerateDirectories())
    {
        tNode = new TreeNode(i.Name.ToString());
        node.ChildNodes.Add(tNode);
        Dir_Node_Recurse(tNode, hostName);
    }
    foreach (var i in dir.EnumerateFiles())
    {
        node.ChildNodes.Add(new TreeNode(i.Name.ToString()));
    }
}
This appears to cause extreme resource usage because of the large number of TreeNode objects being created. Should I create my own node type to minimize memory usage, or is there another technique that would make this usable?
Is there a reason you need to get all the nodes up front? Can you use an on-demand approach, as sketched below?
You can also profile the code: try pointing it at a smaller directory and observing its behavior.
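With the ASP.NET TreeView, the on-demand approach means marking nodes with PopulateOnDemand and filling in children in the TreeNodePopulate event, so only expanded branches are ever built. A rough sketch, reusing hostName from the question (the event wiring and page markup are assumptions):
// using System.IO; using System.Web.UI.WebControls;
// Markup (assumed): <asp:TreeView ID="FileSelectList" runat="server"
//     OnTreeNodePopulate="FileSelectList_TreeNodePopulate" />

protected void FileSelectList_TreeNodePopulate(object sender, TreeNodeEventArgs e)
{
    var dir = new DirectoryInfo(String.Format(@"\\{0}\{1}", hostName, e.Node.ValuePath));
    foreach (var sub in dir.EnumerateDirectories())
    {
        var child = new TreeNode(sub.Name);
        child.PopulateOnDemand = true; // children are fetched only when this node expands
        e.Node.ChildNodes.Add(child);
    }
    foreach (var file in dir.EnumerateFiles())
    {
        e.Node.ChildNodes.Add(new TreeNode(file.Name));
    }
}
The root share nodes added to FileSelectList would also need PopulateOnDemand set to true for the event to fire.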
What do you want to do?
You are creating a huge page and asking how to make it consume less memory? That's obvious: don't show the whole tree in the page; it's never going to be useful to any user anyway.
You can limit the output to only a few levels, for example.
