I am writing a program which logs my browsing history to a text file. It grabs the URL from the Chrome window and writes it to the file. The method takes the Chrome window handle as a parameter and returns the URL in an out parameter. The code looks like this:
static AutomationElement elm;// = AutomationElement.FromHandle(handle);
static AutomationElement elmUrlBar;

public static void GetChromeUrl(IntPtr handle, out string url)
{
    string namedProperty = "Address and search bar";
    url = null;
    elm = AutomationElement.FromHandle(handle);
    // Find the address bar by its accessible name.
    elmUrlBar = elm.FindFirst(TreeScope.Descendants,
        new PropertyCondition(AutomationElement.NameProperty, namedProperty));
    if (elmUrlBar != null)
    {
        AutomationPattern[] patterns = elmUrlBar.GetSupportedPatterns();
        if (patterns.Length > 0)
        {
            // Assumes the first supported pattern is ValuePattern.
            ValuePattern val = (ValuePattern)elmUrlBar.GetCurrentPattern(patterns[0]);
            url = val.Current.Value;
        }
    }
}
I call this method from a timer callback every 5 seconds or so, and it returns the correct URL from the Chrome browser window. So it does its work well, but it seems it never frees the memory occupied by the AutomationElement objects, and the RAM used by this application is constantly growing. I profiled it using dotMemory 4.0 and ANTS Memory Profiler 8, and both show that AutomationElement objects are created but never collected by the garbage collector. Does anybody know how to resolve this issue?
The company I currently work at ran into this problem as well.
This article has the answer to the problem: UIAutomation Memory Issue
You basically need a way to call the GC from the application you are automating, because automation elements get added to the large object heap and take roughly 3 minutes to be disposed of if no pointers to them exist at that point.
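If the growth is on the client side, as the profilers in the question suggest, one stopgap is to drop the cached AutomationElement references and force a collection after each poll. This is only a sketch against the code in the question; chromeHandle and LogUrl are placeholders, and forcing the GC every few seconds is a workaround rather than a proper fix:
private static void TimerCallback(object state)
{
    string url;
    GetChromeUrl(chromeHandle, out url); // chromeHandle: placeholder for the Chrome window handle
    LogUrl(url);                         // LogUrl: placeholder for writing the URL to the text file

    // Let go of the cached automation elements so the large-object-heap
    // instances become unreachable, then collect.
    elm = null;
    elmUrlBar = null;
    GC.Collect();
    GC.WaitForPendingFinalizers();
}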
Related
In my application (.NET Framework 4.5) I'm rendering some RDLC reports (50-60) in order to export them to a single PDF.
Unfortunately there seems to be a big memory leak; basically every LocalReport never gets disposed.
This is my code:
public void ProcessReport(ReportDataSource[] reportDS, string reportPath)
{
    const string format = "PDF";
    string deviceInfo = null;
    string encoding = String.Empty;
    string mimeType = String.Empty;
    string extension = String.Empty;
    Warning[] warnings = null;
    string[] streamIDs = null;
    Byte[] pdfArray = null;

    using (var report = new LocalReport())
    {
        report.EnableExternalImages = true;
        report.ReportEmbeddedResource = reportPath;
        report.Refresh();

        foreach (var rds in reportDS)
        {
            report.DataSources.Add(rds);
        }

        report.Refresh();

        try
        {
            pdfArray = report.Render(format, deviceInfo, out mimeType, out encoding,
                out extension, out streamIDs, out warnings);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.InnerException.Message);
            throw;
        }

        report.ReleaseSandboxAppDomain();
        report.Dispose();

        //Add pdfArray to MemoryStream and then to PDF - Doesn't leak
    }
}
I found the memory leak just by looking at the Visual Studio memory panel: every time report.Render gets called it adds 20-30 MB, and that memory never goes down until I close the application. I'm sure the MemoryStream is not the issue, because even with it commented out I still get 200-250 MB in memory that never gets released. This is bad because after running this 3-4 times it reaches over 1 GB, at which point the application doesn't even run anymore. I also tried calling the garbage collector manually, but that didn't work. The application is 32-bit.
What can I do to fix this?
I have a real solution and can explain why!
It turns out that LocalReport here is using .NET Remoting to dynamically create a sub-AppDomain and run the report, in order to avoid a leak somewhere internally. We then notice that, eventually, the report will release all the memory after 10 to 20 minutes. For people generating a lot of PDFs, this isn't going to work. However, the key here is that they are using .NET Remoting. One of the key parts of Remoting is something called "leasing". Leasing means that it will keep that marshalled object around for a while, since Remoting is usually expensive to set up and it's probably going to be used more than once. LocalReport RDLC is abusing this.
By default, the leasing time is... 10 minutes! Also, if something makes various calls into it, it adds another 2 minutes to the wait time! Thus, it can randomly be between 10 and 20 minutes depending on how the calls line up. Luckily, you can change how long this timeout lasts. Unluckily, you can only set this once per AppDomain... Thus, if you need Remoting for something other than PDF generation, you will probably need to run that in a separate service so you can change the defaults. To do this, all you need to do is run these four lines of code at startup:
// Requires: using System.Runtime.Remoting.Lifetime;
LifetimeServices.LeaseTime = TimeSpan.FromSeconds(5);
LifetimeServices.LeaseManagerPollTime = TimeSpan.FromSeconds(5);
LifetimeServices.RenewOnCallTime = TimeSpan.FromSeconds(1);
LifetimeServices.SponsorshipTimeout = TimeSpan.FromSeconds(5);
You'll see the memory use start to rise and then within a few seconds you should see the memory start coming back down. Took me days with a memory profiler to really track this down and realize what was happening.
You can't wrap ReportViewer in a using statement (Dispose crashes), but you should be able to if you use LocalReport directly. After that disposes, you can call GC.Collect() if you want to be doubly sure you are doing everything you can to free up that memory.
Hope this helps!
Edit
Apparently, you should call GC.Collect(0) after generating a PDF report or else it appears the memory use could still get high for some reason.
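To make this concrete, here is a minimal sketch of the per-report pattern described above, assuming the LifetimeServices lines have already run once at startup; the report path and data source are placeholders, and the GC.Collect(0) call is the optional step from the edit:
byte[] pdfBytes;
using (var report = new LocalReport())
{
    report.ReportEmbeddedResource = "MyApp.Reports.Report1.rdlc"; // placeholder path
    report.DataSources.Add(myDataSource);                         // placeholder data source
    pdfBytes = report.Render("PDF");
    report.ReleaseSandboxAppDomain();
}
// Optional, per the edit above: collect gen 0 so the render buffers go away sooner.
GC.Collect(0);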
I'm trying to make a .NET 4.0 program which will get the filename of a PowerPoint Show (PPS) that is being viewed on the same machine. The purpose is to time how long a particular presentation takes to go through without any user action.
I've tried the following:
oPPTApp = (PowerPoint.Application)System.Runtime.InteropServices.Marshal
    .GetActiveObject("PowerPoint.Application");
string ppsname = oPPTApp.ActivePresentation.Name.ToLower();
The program runs minimized while the presentation is being viewed. Even while interacting with the PPS (scrolling through slides, clicking, etc), I get the following error from Microsoft.Office.Interop.PowerPoint._Application:
Application (unknown member) : Invalid request. There is no active presentation.
After searching through http://support.microsoft.com/kb/285472, I tried adding:
oPPTApp.Visible = Microsoft.Office.Core.MsoTriState.msoTrue;
However, this started a new instance of PowerPoint, followed by the same error.
I'll also mention I'm using Microsoft PowerPoint 14.0 Object Library and the IOleMessageFilter error handler from here: http://msdn.microsoft.com/en-us/library/vstudio/ms228772(v=vs.110).aspx
Does ActivePresentation.Name not apply to PPS files? Is there a better way of getting the filename of an open PPS?
Why not use plain old System.Diagnostics.Process?
The following code will give you the name of the active window of all processes with a given friendly name (process name without extension; in your case "POWERPNT").
using System.Collections.Generic;
using System.Diagnostics;

public static IEnumerable<string> GetActiveMainWindowsTitle(string processName)
{
    var ps = Process.GetProcessesByName(processName);
    foreach (var p in ps)
        yield return p.MainWindowTitle;
}
The MainWindowTitle of an MS PowerPoint process is the name of the active presentation, so if that is the only info you need, this should do. (You might need to clean up the title a little if you are running, for example, a Home and Student edition; there is usually a "non-commercial use" tag in the main window title.)
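For example, a small usage sketch; the suffix trimming is only an assumption about how your PowerPoint version formats its window title:
foreach (var title in GetActiveMainWindowsTitle("POWERPNT"))
{
    // Titles often look like "presentation.pps - Microsoft PowerPoint";
    // trim anything after the last " - " if your version appends such a tag.
    var name = title;
    int idx = name.LastIndexOf(" - ");
    if (idx > 0)
        name = name.Substring(0, idx);
    Console.WriteLine(name.ToLower());
}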
I'm currently working on an application which runs on Windows Mobile 6.1 (not WP). I built an application which synchronizes data from a remote server multiple times a day, but somehow it looks like this data is "remembered" after it finishes. Task Manager shows that about 3 MB is used at a regular start of the application, which increases by about 2 MB every time I run the synchronization. After multiple runs I get a warning about memory usage and have to reset the device or restart the program.
What I'm looking for is some way to clear the data after synchronization, a kind of garbage collection. In (regular) C# I've found GC.Collect(), but I can't get this working in C# on the mobile device.
Below is my code, which works correctly, except that at a certain point I get the message "Geheugentekort" ("Memory shortage").
Probably after the for loop I have to empty variables like doc, root, and the XmlNodeList objects, but the question is how...
My device: Pidion BIP-5000
OS: Windows Mobile 6.1
XmlDocument doc = new XmlDocument();
doc.Load(xmlUrl);
XmlElement root = doc.DocumentElement;

try
{
    totaal = Int32.Parse(doc.GetElementsByTagName("Totaal")[0].InnerText.ToString());

    // Create lists with values
    XmlNodeList namen = doc.GetElementsByTagName("naam");
    XmlNodeList ptypen = doc.GetElementsByTagName("ptype");
    XmlNodeList ids = doc.GetElementsByTagName("id");

    // Iterate over the total
    for (int i = 0; i < totaal; i++)
    {
        // Create variables from it
        int id = Int32.Parse(ids[i].InnerText.ToString());
        int ptype = Int32.Parse(ptypen[i].InnerText.ToString());
        string naam = namen[i].InnerText.ToString();

        // Check if the ID exists
        int tot = this.tbl_klantTableAdapter.GetData(id).Count;
        if (tot == 0)
        {
            // New item, add
            this.tbl_klantTableAdapter.Insert(naam, ptype, id);
        }
        else
        {
            // Existing, update
            this.tbl_klantTableAdapter.Update(naam, ptype, id);
        }
    }
}
catch
{
    // Rest of code
}
Disposing your node lists after the loop may help:
System.Xml.XmlNodeList tempNodeList = namen; // or ptypen, ids, etc.
IDisposable disposeMe = tempNodeList as IDisposable;
if (disposeMe != null)
{
    disposeMe.Dispose();
}
XmlNodeList implements IDisposable, so you can call namen.Dispose() (also for the other XmlNodeList objects) to force the objects to be discarded and cleaned up.
Yes, you definitely should keep the XML handling local and dispose of the XML objects after use; the XML objects seem to occupy large memory blocks.
You should use nameX.Dispose() and nameX = null to free up the memory used by these temporary XML objects.
You may use GC.Collect() to force memory collection: http://blogs.msdn.com/b/stevenpr/archive/2004/07/26/197254.aspx.
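A minimal sketch of that cleanup, using the variables from the question and placed right after the for loop; nulling the references and forcing a collection is a workaround rather than a fix, and GC.Collect() should be used sparingly on the Compact Framework:
// Right after the for loop, once the data has been written to the table adapter:
namen = null;
ptypen = null;
ids = null;
root = null;
doc = null;

GC.Collect(); // force a collection on the Compact Framework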
You may also use the remote .NET performance viewer to get insights into memory usage: http://blogs.msdn.com/b/stevenpr/archive/2006/04/17/577636.aspx
If your app is consuming a lot of memory before calling into the sync task, you may consider creating a new application with a separate process for the sync stuff. You can also free up memory for your process by moving functions into a library. WM 6.1 and higher have a new memory slot for Compact Framework libraries, so the main process memory slot is not reduced: http://blogs.msdn.com/b/robtiffany/archive/2009/04/09/memmaker-for-the-net-compact-framework.aspx
If you need more help you should provide more details/code.
IList<string> values = new List<string>();
var instance = Find.By("hwnd", "110CC");
...
if(instance != null)
{
var ie = Browser.AttachTo<IE>(instance);
The browser instance is manually started by the tester in case this makes any difference.
This just doesn't work for me; I keep getting an exception from WatiN saying that it can't find a window with that handle.
I got the handle with Spy++.
I also tried searching by window title and window URL, but that didn't work either.
Is there any way to do this?
Thank you
The below works as expected, with no errors (WatiN 2.1, IE9, Win7).
Before running the code, open an IE browser and point it at cnn.com
IE browser = Browser.AttachTo<IE>(Find.ByUrl("www.cnn.com"));
browser.TextField("hdr-search-box").TypeText("searchy");
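If attaching by URL isn't convenient (for example the tester navigates elsewhere before you attach), attaching by window title may also work; a small sketch, assuming the open tab's title contains "CNN" (adjust the text to whatever page the tester has open):
IE browser = Browser.AttachTo<IE>(Find.ByTitle("CNN")); // WatiN accepts a partial title here
browser.TextField("hdr-search-box").TypeText("searchy");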
You can't have your cake and eat it too, apparently.
I'm currently using the System.Windows.Forms.WebBrowser in my application. The program currently depends on using the GetElementsByTagName function. I use it to gather up all the elements of a certain type (either "input"s or "textarea"s), so I can sort through them and return the value of a specific one. This is the code for that function (my WebBrowser is named web1):
// returns the value from an element.
public String FetchValue(String strTagType, String strName)
{
    HtmlElementCollection elems;
    HtmlDocument page = web1.Document.Window.Frames[1].Document;
    elems = page.GetElementsByTagName(strTagType);
    foreach (HtmlElement elem in elems)
    {
        if (elem.GetAttribute("name") == strName ||
            elem.GetAttribute("ref") == strName)
        {
            if (elem.GetAttribute("value") != null)
            {
                return elem.GetAttribute("value");
            }
        }
    }
    return null;
}
(points to note: the webpage I need to pull from is in a frame, and depending on circumstances, the element's identifying name will be either in the name or the ref attribute)
All of that works like a dream with the System.Windows.Forms.WebBrowser.
But what it is unable to do is redirect the opening of a new window so that it remains in the application. Anything that opens in a new window shoots off to the user's default browser, thus losing the session. This could easily be fixed with the NewWindow2 event, which System.Windows.Forms.WebBrowser doesn't have.
Now forgive me for being stunned at its absence. I have but recently ditched VB6 and moved on to C# (yes VB6, apparently I am employed under a rock), and in VB6, the WebBrowser possessed both the GetElementsByTagName function and the NewWindow2 event.
The AxSHDocVw.WebBrowser has a NewWindow2 event. It would be more than happy to help me route my new windows to where I need them. The code to do this in THAT WebBrowser is (frmNewWindow being a simple form containing only another WebBrowser called web2 (Dock set to Fill)):
private void web1_NewWindow2(
object sender,
AxSHDocVw.DWebBrowserEvents2_NewWindow2Event e)
{
frmNewWindow frmNW = new frmNewWindow();
e.ppDisp = frmNW.web2.Application;
frmNW.web2.RegisterAsBrowser = true;
frmNW.Visible = true;
}
I am unable to produce on my own a way to replicate that function with the underwhelming regular NewWindow event.
I am also unable to figure out how to replicate the FetchValue function I detailed above using the AxSHDocVw.WebBrowser. It appears to go about things in a totally different way and all my knowledge of how to do things is useless.
I know I'm a sick, twisted man for this bizarre fantasy of using these two things in a single application. But can you find it in your heart to help this foolish idealist?
I could no longer rely on the workaround, and had to abandon System.Windows.Forms.WebBrowser. I needed NewWindow2.
I eventually figured out how to accomplish what I needed with the AxWebBrowser. My original post was asking for either a solution for NewWindow2 on the System.Windows.Forms.WebBrowser, or an AxWebBrowser replacement for .GetElementsByTagName. The replacement requires about 4x as much code, but gets the job done. I thought it would be prudent to post my solution, for later Googlers with the same quandary. (also in case there's a better way to have done this)
IHTMLDocument2 webpage = (IHTMLDocument2)webbrowser.Document;
IHTMLFramesCollection2 allframes = webpage.frames;
IHTMLWindow2 targetframe = (IHTMLWindow2)allframes.item("name of target frame");
webpage = (IHTMLDocument2)targetframe.document;
IHTMLElementCollection elements = (IHTMLElementCollection)webpage.all.tags("target tagtype");
foreach (IHTMLElement element in elements)
{
    if ((string)element.getAttribute("name") == strTargetElementName)
    {
        return (string)element.getAttribute("value");
    }
}
The webbrowser.Document is cast to an IHTMLDocument2, the IHTMLDocument2's frames are put into an IHTMLFramesCollection2, then the specific desired frame is cast to an IHTMLWindow2 (you can choose the frame by index or name), and that frame's document member is cast back into an IHTMLDocument2 (reusing the original variable for convenience). From there, the IHTMLDocument2's all.tags() method is functionally identical to the old WebBrowser.Document.GetElementsByTagName() method, except the result is an IHTMLElementCollection rather than an HtmlElementCollection. You can then foreach over the collection, with the individual elements typed as IHTMLElement, and use getAttribute to retrieve the attributes. Note that the g is lowercase.
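For reference, the fragment above can be folded back into a FetchValue-shaped method mirroring the original, with the ref-attribute check restored; webbrowser and the frame name are placeholders, so treat this as a sketch rather than the definitive replacement:
public String FetchValue(String strTagType, String strName)
{
    IHTMLDocument2 webpage = (IHTMLDocument2)webbrowser.Document;
    IHTMLFramesCollection2 allframes = webpage.frames;
    IHTMLWindow2 targetframe = (IHTMLWindow2)allframes.item("name of target frame"); // placeholder frame name
    webpage = (IHTMLDocument2)targetframe.document;

    IHTMLElementCollection elements = (IHTMLElementCollection)webpage.all.tags(strTagType);
    foreach (IHTMLElement element in elements)
    {
        // The identifying name may be in either the name or the ref attribute, as in the original.
        string name = element.getAttribute("name") as string;
        string reference = element.getAttribute("ref") as string;
        if (name == strName || reference == strName)
        {
            return element.getAttribute("value") as string;
        }
    }
    return null;
}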
The WebBrowser control can handle the NewWindow event so that new popup windows will be opened in the WebBrowser.
private void webBrowser1_NewWindow(object sender, CancelEventArgs e)
{
// navigate current window to the url
webBrowser1.Navigate(webBrowser1.StatusText);
// cancel the new window opening
e.Cancel = true;
}
http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/361b6655-3145-4371-b92c-051c223518f2/
The only solution to this I have seen was from a good few years ago, called csExWb2, now on Google Code here.
It gives you an ExWebBrowser control, but with full-on access to all the interfaces and events offered by IE. I used it to get deep and dirty control of elements in a winforms-hosted html editor.
It may be a bit of a leap jumping straight into that, mind.