I am using the GeckoFX web browser control version 21.0.1 and .NET Framework 4.0 in my Windows application.
When I navigate to certain web pages I get a pop-up confirmation message:
This web page is being redirected to a new location. Would you like to
resend the form data you have typed to the new location?
How can I disable this kind of message?
So far I have tried the following settings, but they didn't help:
GeckoPreferences.User["security.warn_viewing_mixed"] = false;
GeckoPreferences.User["plugin.state.flash"] = 0;
GeckoPreferences.User["browser.cache.disk.enable"] = false;
GeckoPreferences.User["browser.cache.memory.enable"] = false;
You could try providing your own nsIPromptService2 / nsIPrompt implementation.
Run this early during program start-up (although after Xpcom.Initialize):
PromptFactory.PromptServiceCreator = () => new FilteredPromptService();
Where FilteredPromptService is defined something like this:
internal class FilteredPromptService : nsIPromptService2, nsIPrompt
{
private static PromptService _promptService = new PromptService();
public void Alert(nsIDOMWindow aParent, string aDialogTitle, string aText)
{
if (ShouldUseDefaultBehaviour(aText)) // hypothetical predicate: put your own filtering logic here
{
_promptService.Alert(aDialogTitle, aText);
}
// Else do nothing, which suppresses the dialog.
}
// TODO: implement other methods in similar fashion. (returning appropriate return values)
}
You will also need to make sure that error pages are not enabled:
GeckoPreferences.User["browser.xul.error_pages.enabled"] = false;
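Putting the pieces together, start-up might look roughly like this (a sketch; "Firefox" is just a placeholder for the path to your XULRunner/Gecko runtime, and FilteredPromptService is the class above):
// Initialize XPCOM first, then set preferences and the prompt-service factory,
// before creating any GeckoWebBrowser instances.
Xpcom.Initialize("Firefox");
GeckoPreferences.User["browser.xul.error_pages.enabled"] = false;
PromptFactory.PromptServiceCreator = () => new FilteredPromptService();
var browser = new GeckoWebBrowser { Dock = DockStyle.Fill };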
If you run the following code, then at each iteration of the loop the browser window is brought to the front and takes focus.
public class Program
{
private static void Main()
{
var driver = new ChromeDriver();
driver.Navigate().GoToUrl("https://i.imgur.com/cdA7SBB.jpg");
for (int i = 0; i < 100; i++)
{
var ss = ((ITakesScreenshot)driver).GetScreenshot();
ss.SaveAsFile($"D:/imgs/{i}.jpg");
}
}
}
The question is: why does this happen, and can it be turned off? Headless mode is not an option for me.
It seems this always happens when Selenium needs to save or read a file, or start a process.
To take a screenshot, chromedriver activates the window. It's by design and there's no option to avoid it even though it's technically possible.
For the relevant sources have a look at window_commands.cc.
You could however avoid the effect by moving the window off-screen:
driver.Manage().Window.Position = new Point(-32000, -32000);
or by launching the browser off-screen:
var options = new ChromeOptions();
options.AddArgument("--window-position=-32000,-32000");
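For completeness, the options object is then passed to the driver when it is created; a minimal sketch:
var options = new ChromeOptions();
options.AddArgument("--window-position=-32000,-32000");
var driver = new ChromeDriver(options); // window starts off-screen, screenshots still work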
UPDATE
You can avoid the activation by taking the screenshot directly via the DevTools API. Here's a class that overrides GetScreenshot:
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;
using JObject = System.Collections.Generic.Dictionary<string, object>;
class ChromeDriverEx : ChromeDriver
{
public ChromeDriverEx(ChromeOptions options = null)
: base(options ?? new ChromeOptions()) {
var repo = base.CommandExecutor.CommandInfoRepository;
repo.TryAddCommand("send", new CommandInfo("POST", "/session/{sessionId}/chromium/send_command_and_get_result"));
}
public new Screenshot GetScreenshot() {
object response = Send("Page.captureScreenshot", new JObject {{"format", "png"}, {"fromSurface", true}});
string base64 = (string)((JObject)response)["data"];
return new Screenshot(base64);
}
protected object Send(string cmd, JObject args) {
return this.Execute("send", new JObject {{"cmd", cmd}, {"params", args}}).Value;
}
}
usage:
var driver = new ChromeDriverEx();
driver.Url = "https://stackoverflow.com";
driver.GetScreenshot().SaveAsFile("/tmp/screenshot.png");
driver.Quit();
When you invoke the Navigate().GoToUrl("url") method from your automation script, it is expected that the script will interact with some of the elements on the web page. For Selenium to interact with those elements, the browser needs focus. Hence opening the browser, bringing it to the front and giving it focus is the default behaviour of Navigate().GoToUrl("url").
Default mode versus headless mode is controlled by the ChromeOptions/FirefoxOptions class, which is passed as an argument while initializing the WebDriver instance, before Navigate().GoToUrl("url") is ever called. So Navigate().GoToUrl("url") has no influence on whether the WebDriver instance runs in default mode or headless mode.
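To illustrate that separation, headless operation is requested purely through the options object before the driver exists (a sketch for illustration only, since headless mode is ruled out in the question):
var options = new ChromeOptions();
options.AddArgument("--headless"); // the mode of operation is fixed here
var driver = new ChromeDriver(options);
driver.Navigate().GoToUrl("https://example.com"); // GoToUrl has no say in the mode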
Now consider invoking the method from the ITakesScreenshot interface, i.e. the ITakesScreenshot.GetScreenshot method, which is defined as:
Gets a Screenshot object representing the image of the page on the screen.
For a WebDriver instance that extends ITakesScreenshot, it makes a best effort, depending on the browser, to return the following in order of preference:
Entire page
Current window
Visible portion of the current frame
The screenshot of the entire display containing the browser
There may be some instances when the browser loses focus. In that case you can use IJavaScriptExecutor to regain the focus as follows:
((IJavaScriptExecutor)driver).ExecuteScript("window.focus();");
I was struggling with an issue where the generic GetScreenshot() in parallel testing was causing the browser to lose focus. Some elements were being removed from the DOM and my tests were failing. I've come up with a working solution for Edge and Chrome 100+ with Selenium 4.1:
public Screenshot GetScreenshot()
{
IHasCommandExecutor executor = webDriverInstance as IHasCommandExecutor;
var sessionId = ((WebDriver)webDriverInstance).SessionId;
var command = new HttpCommandInfo(HttpCommandInfo.PostCommand, $"/session/{sessionId}/chromium/send_command_and_get_result");
executor.CommandExecutor.TryAddCommand("Send", command);
var response = Send(executor, "Page.captureScreenshot", new JObject { { "format", "png" }, { "fromSurface", true } });
var base64 = ((Dictionary<string, object>)response.Value)["data"];
return new Screenshot(base64.ToString());
}
private Response Send(IHasCommandExecutor executor, string cmd, JObject args)
{
var json = new JObject { { "cmd", cmd }, { "params", args } };
var command = new Command("Send", json.ToString());
return executor.CommandExecutor.Execute(command);
}
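The snippet above leans on a couple of declarations it does not show. Presumably JObject here is Newtonsoft's JObject (its ToString() becomes the JSON body handed to the command) and webDriverInstance is the driver the tests already hold, roughly:
// Assumed context for the methods above (illustrative, not part of the original post):
// - JObject is Newtonsoft.Json.Linq.JObject, so json.ToString() yields the command's JSON body
// - webDriverInstance is the IWebDriver the test class already owns
private IWebDriver webDriverInstance;

// Example call site; the file name is just an example.
public void SaveScreenshotWithoutFocus(string path)
{
    GetScreenshot().SaveAsFile(path);
}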
I am using GeckoFx to perform a login to a specific website. This website edits the page with new information should the login fail (or require additional authentication, such as a ReCaptcha). Unfortunately, it is vital that I have access to an event that fires when the page is updated. I have tried numerous approaches, mainly:
A continual check whether the URI is still the same upon each login attempt, and a subsequent check on the specific element in question (to see if the display: none property was changed). This resulted in an infinite loop, as it seems GeckoFx updates the page in a non-blocking way, so the condition never changed while my check was running.
Sleeping for ~5 seconds between login requests and using the aforementioned URI check. All this did (predictably, I was grasping at straws) was freeze the browser for 5 seconds and still fail to update the page.
Searching the GeckoFx codebase for a specific event when the page is updated similar to the DocumentCompleted event (no such luck).
The most common approach I have read about (and the one that makes the most sense) is to use a MutationObserver. It seems that all of the answers across the internet involve injecting JavaScript to perform the requisite task. Seeing as my programming background has not touched web development whatsoever, I'm trying to stick to what I know.
Here is my approach so far; unfortunately, it is not much.
public class GeckoTestWebLogin
{
private readonly string _user;
private readonly string _pass;
public GeckoWebBrowser Gweb;
public Uri LoginUri { get; } = new Uri("https://website.com/login/");
public bool LoginCompleted { get; private set; } = false;
public bool Loaded { get; private set; } = false;
public GeckoTestWebLogin(string user, string pass)
{
_user = user;
_pass = pass;
Xpcom.EnableProfileMonitoring = false;
Xpcom.Initialize("Firefox");
//this code is for testing purposes, it will be removed upon project completion
CookieManager.RemoveAll();
Gweb = new GeckoWebBrowser();
Gweb.DocumentCompleted += DocLoaded;
//right about here is where I get lost, where can I set a callback method for the observer to report back to? Is this even how it works?
MutationObserver mutationObserver = new MutationObserver(Gweb.Window.DomWindow, (nsISupports)Gweb.Document.DomObject);
}
private void TestObservedEvent(string parms, object[] objs)
{
MessageBox.Show("The page was changed # " + DateTime.Now);
}
public void DocLoaded(object obj, GeckoDocumentCompletedEventArgs e)
{
Loaded = true;
if (Gweb.Url != LoginUri) return;
AttemptLogin();
}
private void AttemptLogin()
{
GeckoElementCollection elements = Gweb.Document.GetElementsByTagName("input");
foreach (GeckoHtmlElement element in elements)
{
switch (element.Id)
{
case "username":
element.SetAttribute("value", _user);
break;
case "password":
element.SetAttribute("value", _pass);
break;
case "importantchangedinfo":
GeckoHtmlElement authcodeModal =
(GeckoHtmlElement)
Gweb.Document.GetElementsByClassName("login_modal").First();
if (authcodeModal.Attributes["style"].NodeValue != "display: none")
{
InputForm form = new InputForm { InputDescription = "Captcha Required!" };
form.ShowDialog();
elements.FirstOrDefault(x => x.Id == "captchabox")?.SetAttribute("value", form.Input);
}
break;
}
}
elements.FirstOrDefault(x => x.Id == "Login")?.Click();
}
public void Login()
{
//this will cause the DocLoaded event to fire after completion
Gweb.Navigate(LoginUri.ToString());
}
}
As stated in the comments in the code above, I am completely lost at:
MutationObserver mutationObserver = new MutationObserver(Gweb.Window.DomWindow, (nsISupports)Gweb.Document.DomObject);
I can't seem to find anything in GeckoFx's source for MutationObserver that would allow me to set a callback/event/whathaveyou. Is my approach the correct way to go about things or am I left with no options other than to inject Javascript into the page?
Much appreciated, thank you in advance.
Here is my attempt at option 2 in Tom's answer:
(Added into GeckoTestWebLogin)
public void DocLoaded(object obj, GeckoDocumentCompletedEventArgs e)
{
Loaded = true;
if (Gweb.Url != LoginUri) return;
MutationEventListener mutationListener = new MutationEventListener();
mutationListener.OnDomMutation += TestObservedEvent;
nsIDOMEventTarget target = Xpcom.QueryInterface<nsIDOMEventTarget>(/*Lost here*/);
using (nsAString modified = new nsAString("DOMSubtreeModified"))
target.AddEventListener(modified, mutationListener, true, false, 0);
AttemptLogin();
}
MutationEventListener.cs:
public delegate void OnDomMutation(/*DomMutationArgs args*/);
public class MutationEventListener : nsIDOMEventListener
{
public event OnDomMutation OnDomMutation;
public void HandleEvent(nsIDOMEvent domEvent)
{
OnDomMutation?.Invoke(/*new DomMutationArgs(domEvent, this)*/);
}
}
I don't think Geckofx's webidl compiler is currently advanced enough to generate the callback constructor.
Option 1. - Enhance MutationObserver source.
You could modify the MutationObserver source manually to add the necessary constructor callback, then recompile GeckoFx. (I haven't looked to see how difficult this is.)
Option 2. - Use old style Mutation events.
public class DOMSubtreeModifiedEventListener : nsIDOMEventListener
{
... // Implement HandleEvent
}
Then something like (maybe in DocumentCompleted event handler):
_domSubtreeModifiedEventListener = new DOMSubtreeModifiedEventListener(this);
var target = Xpcom.QueryInterface<nsIDOMEventTarget>(body);
using (nsAString subtreeModified = new nsAString("DOMSubtreeModified"))
target.AddEventListener(subtreeModified, _domSubtreeModifiedEventListener, true, false, 0);
Option 3. - Use Idle + Check.
Add a WinForms Application.Idle event handler and examine the document to know when it's ready; a sketch follows below.
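A rough sketch of that idea against the question's own fields (the class name and the style check mirror AttemptLogin; everything else is illustrative):
// Option 3 sketch: re-check the page whenever the WinForms message loop goes idle.
EventHandler idleCheck = null;
idleCheck = (s, e) =>
{
    var modal = (GeckoHtmlElement)Gweb.Document
        .GetElementsByClassName("login_modal")
        .FirstOrDefault();
    if (modal != null && modal.Attributes["style"].NodeValue != "display: none")
    {
        Application.Idle -= idleCheck;         // stop polling once the change is visible
        TestObservedEvent("idle check", null); // reuse the question's callback
    }
};
Application.Idle += idleCheck;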
Option 4. - Inject a javascript callback.
(As you have already mentioned.) This example waits until after a resize is done.
Basically inject "<body onresize=fireResizedEventAfterDelay()>", then inject something like this:
string fireResizedEventAfterDelayScript = "<script>\n" +
"var resizeListner;" +
"var msDelay = 20;" +
"function fireResizedEventAfterDelay() {" +
"clearTimeout(resizeListner);" +
"resizeListner = setTimeout(function() { document.dispatchEvent (new MessageEvent('resized')); }, msDelay);" +
"}\n" +
"</script>\n";
Then in the C#:
browser.AddMessageEventListener("resized", (s) => runafterImDone())
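The same pattern could be pointed at DOM changes instead of resizes. A sketch (the MutationObserver here is the page's own JavaScript API, the "domchanged" event name and script body are illustrative, and it would be injected the same way as the resize script above):
string fireDomChangedEventScript = "<script>\n" +
    "var observer = new MutationObserver(function() {" +
    " document.dispatchEvent(new MessageEvent('domchanged'));" +
    "});" +
    "observer.observe(document.body, { childList: true, subtree: true, attributes: true });\n" +
    "</script>\n";
Then in the C#:
browser.AddMessageEventListener("domchanged", (s) => TestObservedEvent("dom changed", null));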
I am trying to write an Orchard module which will fetch some HTML pages from predefined URLs (with JavaScript executed) and extract some HTML from them. I ended up with the Awesomium browser engine. As Awesomium is intended to be run in a single thread, I have created a SingletonDependency with a thread in it which keeps running while the module is alive. Thus far nothing has been a problem and the program works fine on my local machine, but when I deploy the module to the server, the Awesomium DocumentReady event never gets fired. I have tested the library (Awesomium) in a test ASP.NET MVC app and it works as expected.
Following is a code snippet of what I have done so far:
public class DownloadService : IDownloadService
{
private Thread downloaderThread;
const int MillisecondsTimeout = 30000;
private List<DownloadQueueItem> downloadQueue = new System.Collections.Generic.List<DownloadQueueItem>();
private void DoDownloadTask()
{
while (true)
{
if (downloadQueue.Any())
{
var itemToDownload = downloadQueue.FirstOrDefault(qi => qi.IsCompleted == false);
if (itemToDownload != null)
{
var webPreferences = WebPreferences.Default;
using (var session = WebCore.CreateWebSession(webPreferences))
{
using (var view = WebCore.CreateWebView(800, 600, session))
{
view.Source = new Uri(itemToDownload.Url, UriKind.Absolute);
bool finishedLoading = false;
bool isReady = false;
view.LoadingFrameComplete += (s, e) =>
{
if (e.IsMainFrame)
finishedLoading = true;
};
view.DocumentReady += (s, e) =>
{
isReady = true;
};
int timeSpent = 0;
while (!finishedLoading || !isReady)
{
Thread.Sleep(500);
WebCore.Update();
timeSpent += 500;
if (timeSpent > MillisecondsTimeout)
{
isReady = true;
finishedLoading = true;
itemToDownload.Result = "";
}
}
if (timeSpent < MillisecondsTimeout)
{
itemToDownload.Result = view.ExecuteJavascriptWithResult("document.getElementsByTagName('html')[0].outerHTML");
}
itemToDownload.IsCompleted = true;
}
}
}
}
downloadQueue = downloadQueue.Where(qi => !qi.IsProcessed).ToList();
Thread.Sleep(500);
}
}
}
Update
The scenario: I am trying to set up an infinite loop that checks for updates at a specific interval, downloads some HTML pages, searches them for special tags with specific selectors (by means of CsQuery in my case), and then initializes parts and fields of a ContentItem (according to the binding rules I defined earlier) with their contents. Let me mention again: everything works fine on my local machine.
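For the extraction step, a minimal CsQuery sketch (the selector is just an example, and itemToDownload.Result is the outerHTML captured by the download loop above):
CQ dom = CQ.Create(itemToDownload.Result);
string title = dom["div.product-title"].Text(); // example selector; map the value onto the ContentItem field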
Here is a working example of the Awesomium engine in an ASP.NET MVC application from this SO question. My code is similar to the example provided; in that example an IHttpModule is used to do the download task, but in my case it is a class derived from ISingletonDependency.
Update 2
After hours of debugging I finally figured out what the problem is. The problem is that Awesomium.Core.dll has dependencies which the ASP.NET server copies to the following directory:
"C:\\Windows\\Microsoft.NET\\Framework\\v4.0.30319\\Temporary ASP.NET Files\\root\\8b7dd3d5\\5e9b4019\\assembly\\dl3\\788672a2\\001db261_934ccf01"
In Orchard, dependencies are copied to the Dependencies folder automatically, but not every type of file (only DLLs, I think). In my scenario there is a file called awesomium_process which Orchard cannot copy to the Dependencies folder, and this is the exact problem.
Update 3
I added these files manually and everything is working like a charm, but I am still after a solution to copy them programmatically.
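One way to do that copy programmatically (a sketch; the source path is hypothetical, the target directory is derived from wherever ASP.NET shadow-copied Awesomium.Core.dll, and the filter simply copies every non-DLL file shipped alongside it, which includes awesomium_process):
// Copy Awesomium's native helper files next to the shadow-copied Awesomium.Core.dll
// so awesomium_process can be found at run time. Paths are illustrative.
string sourceDir = HostingEnvironment.MapPath("~/Modules/MyDownloadModule/bin");
string targetDir = Path.GetDirectoryName(typeof(WebCore).Assembly.Location);
foreach (var file in Directory.GetFiles(sourceDir)
                              .Where(f => !f.EndsWith(".dll", StringComparison.OrdinalIgnoreCase)))
{
    var target = Path.Combine(targetDir, Path.GetFileName(file));
    if (!File.Exists(target))
        File.Copy(file, target);
}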
I need to detect when the URL in the browser changes, whether it is because of a click on a link, a form post, or because I changed the URL in code.
I need it because I'm creating an object to represent the page, and I need to dispose of and recreate it when the URL changes.
Here is what I have tried so far:
private string _pageUrl;
protected T _page = default(T);
protected T Page
{
get
{
if (_page == null || UrlHasChanged())
{
_page = GetPage<T>();
SetPageUrl();
}
return _page;
}
}
private bool UrlHasChanged()
{
var driver = Session.GetDriver();
return driver.Url != _pageUrl;
}
public void SetPageUrl()
{
_pageUrl = Session.GetDriver().Url;
}
This works in most cases but it fails when the test goes forward a page and then goes back to the initial page.
I need a way to detect when the URL changes so I can reset the _page field.
I'm a Java developer, so I searched the C# documentation for what looked similar to the Java API. I think you should use EventFiringWebDriver:
EventFiringWebDriver firingDriver = new EventFiringWebDriver(driver);
firingDriver.NavigatingBack += new EventHandler<WebDriverNavigationEventArgs>(...);
firingDriver.NavigatedBack += new EventHandler<WebDriverNavigationEventArgs>(...);
firingDriver.NavigatingForward += new EventHandler<WebDriverNavigationEventArgs>(...);
firingDriver.NavigatedForward += new EventHandler<WebDriverNavigationEventArgs>(...);
I looked at the unit tests and found this one, which may be useful for you:
http://selenium.googlecode.com/svn/trunk/dotnet/test/WebDriver.Support.Tests/Events/EventFiringWebDriverTest.cs
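Tying that back to the question's Page property, the navigation events can simply invalidate the cached page object. A sketch using the fields from the question (note that the tests must then drive this wrapper rather than the raw driver, and that navigation caused by in-page clicks or form posts does not raise these events, so the existing URL comparison is still useful as a fallback):
var firingDriver = new EventFiringWebDriver(Session.GetDriver());
EventHandler<WebDriverNavigationEventArgs> invalidatePage = (sender, e) =>
{
    _page = default(T); // force the Page getter to rebuild on next access
    _pageUrl = e.Url;
};
firingDriver.Navigated += invalidatePage;        // fires after Navigate().GoToUrl()
firingDriver.NavigatedBack += invalidatePage;    // fires after Navigate().Back()
firingDriver.NavigatedForward += invalidatePage; // fires after Navigate().Forward()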
This is probably more of a general C# and simple threading question than it is a Facebook SDK question, but I may be wrong, and I could really use some help. I am reusing the sample code that comes with the SDK, which includes a FacebookLoginDialog class. I am currently using it like this: in my GetMessages, GetFriendRequests, and other Get* classes, I always wrap calls in a try/catch like this:
try
{
var result = (IDictionary<string, object>)fb.Get("/me/inbox");
}
catch (FacebookOAuthException e)
{
FacebookSession.Login();
}
Here's my login method in my FacebookSession class
public static void Login()
{
var fbLoginDialog = new FacebookLoginDialog(APP_ID, EXTENDED_PERMISSIONS);
DialogResult dr = fbLoginDialog.ShowDialog();
DisplayAppropriateMessage(fbLoginDialog.FacebookOAuthResult);
}
And here is the constructor in my FacebookLoginDialog class (this is where I have the problem)
public FacebookLoginDialog(string appId, string[] extendedPermissions, bool logout)
{
try
{
var oauth = new FacebookOAuthClient { AppId = appId };
var loginParameters = new Dictionary<string, object>
{
{ "response_type", "token" },
{ "display", "popup" }
};
if (extendedPermissions != null && extendedPermissions.Length > 0)
{
var scope = new StringBuilder();
scope.Append(string.Join(",", extendedPermissions));
loginParameters["scope"] = scope.ToString();
}
var loginUrl = oauth.GetLoginUrl(loginParameters);
if (logout)
{
var logoutParameters = new Dictionary<string, object>
{
{ "next", loginUrl }
};
System.Uri uri =
new Uri("https://www.facebook.com/logout.php?next=" +
"https://www.facebook.com/connect/login_success.html&access_token=" +
FacebookSession._accessToken);
this.navigateUrl = uri;
}
else
{
this.navigateUrl = loginUrl;
}
InitializeComponent(); // crash here... sometimes
}
catch (Exception e)
{
//Log error message
}
}
Sorry for all the code, but now the problem. This code works fine the first time through. If I go to my application's permissions page in Facebook and remove the app (that is, remove its permissions) while my desktop app is NOT running, then when I do start it up, it sees that it does not have permission and shows the login dialog. I can save the access_key and it will work just fine. But if I go to the Facebook apps page and yank the permissions while my desktop app IS running, then bad things happen. I get an error message saying the ActiveX control cannot be instantiated because the current thread is not in a single-threaded apartment. I have seen many posts here that say all you have to do is put [STAThread] above your Main(), and my code has that. I have also tried creating a new thread to call the FacebookLoginDialog, but not only did that not work, but since my code is really not designed to run in multiple threads, it started causing more problems.
Is there a simple solution to all this, or do I need to redesign my code so that it properly runs in multiple threads? Or should I just live with the program crashing in those few instances when someone monkeys with the Facebook permissions while my app is running?
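For what it's worth, the "new thread" idea mentioned above usually only works for a dialog hosting an ActiveX WebBrowser control if that thread is explicitly made STA and the caller waits for it. A sketch against the Login() method above (a workaround, not a full redesign; [STAThread] only covers the main thread, not worker threads that catch the FacebookOAuthException):
public static void Login()
{
    // The dialog hosts an ActiveX WebBrowser control, so the thread that creates
    // it must live in a single-threaded apartment.
    var thread = new Thread(() =>
    {
        var fbLoginDialog = new FacebookLoginDialog(APP_ID, EXTENDED_PERMISSIONS);
        fbLoginDialog.ShowDialog();
        DisplayAppropriateMessage(fbLoginDialog.FacebookOAuthResult);
    });
    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
    thread.Join(); // block until the user closes the dialog
}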