I need to detect when the URL in the browser changes, whether it's because of a click on a link, a form post, or a change I made in code.
I need this because I'm creating an object to represent the page, and I need to dispose of and recreate it when the URL changes.
Here is what I have tried so far:
private string _pageUrl;
protected T _page = default(T);
protected T Page
{
get
{
if (_page == null || UrlHasChanged())
{
_page = GetPage<T>();
SetPageUrl();
}
return _page;
}
}
private bool UrlHasChanged()
{
var driver = Session.GetDriver();
return driver.Url != _pageUrl;
}
public void SetPageUrl()
{
_pageUrl = Session.GetDriver().Url;
}
This works in most cases, but it fails when the test navigates forward a page and then goes back to the initial page.
I need a way to detect when the url changes so I can reset the _page field.
I'm a Java developer, so I searched the C# documentation for what looked similar to the Java API. I think you should use EventFiringWebDriver:
EventFiringWebDriver firingDriver = new EventFiringWebDriver(driver);
firingDriver.NavigatingBack += new EventHandler<WebDriverNavigationEventArgs>(...);
firingDriver.NavigatedBack += new EventHandler<WebDriverNavigationEventArgs>(...);
firingDriver.NavigatingForward += new EventHandler<WebDriverNavigationEventArgs>(...);
firingDriver.NavigatedForward += new EventHandler<WebDriverNavigationEventArgs>(...);
I looked at the unit tests and found this one, which may be useful for you:
http://selenium.googlecode.com/svn/trunk/dotnet/test/WebDriver.Support.Tests/Events/EventFiringWebDriverTest.cs
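Tying this back to the question, here is a minimal sketch (assuming Selenium 3's OpenQA.Selenium.Support.Events namespace; PageHost and the abstract GetPage are illustrative stand-ins for the question's Session and GetPage&lt;T&gt;):
using OpenQA.Selenium;
using OpenQA.Selenium.Support.Events;
public abstract class PageHost<T> where T : class
{
    private T _page;
    protected readonly EventFiringWebDriver Driver;
    protected PageHost(IWebDriver driver)
    {
        Driver = new EventFiringWebDriver(driver);
        // Any completed navigation, including back/forward, invalidates the cache.
        Driver.Navigated += (s, e) => _page = null;
        Driver.NavigatedBack += (s, e) => _page = null;
        Driver.NavigatedForward += (s, e) => _page = null;
    }
    // Stand-in for the question's page factory.
    protected abstract T GetPage();
    protected T Page => _page ?? (_page = GetPage());
}
Note that all subsequent WebDriver calls must go through the wrapping EventFiringWebDriver, otherwise the events will not fire.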
If you run the following code, then at each iteration of the loop the browser will come to the front and take focus.
public class Program
{
private static void Main()
{
var driver = new ChromeDriver();
driver.Navigate().GoToUrl("https://i.imgur.com/cdA7SBB.jpg");
for (int i = 0; i < 100; i++)
{
var ss = ((ITakesScreenshot)driver).GetScreenshot();
ss.SaveAsFile($"D:/imgs/{i}.jpg"); // interpolate i so each screenshot gets its own file
}
}
}
The question is: why does this happen, and can it be turned off? Headless mode is not an option for me.
It seems this always happens whenever Selenium needs to save or read a file, or to start a process.
To take a screenshot, chromedriver activates the window. It's by design and there's no option to avoid it even though it's technically possible.
For the relevant sources have a look at window_commands.cc.
You could however avoid the effect by moving the window off-screen:
driver.Manage().Window.Position = new Point(-32000, -32000);
or by launching the browser off-screen:
var options = new ChromeOptions();
options.AddArgument("--window-position=-32000,-32000");
var driver = new ChromeDriver(options);
UPDATE
You can avoid the activation by taking the screenshot directly via the DevTools API. Here's a class that overrides GetScreenshot:
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;
using JObject = System.Collections.Generic.Dictionary<string, object>;
class ChromeDriverEx : ChromeDriver
{
    public ChromeDriverEx(ChromeOptions options = null)
        : base(options ?? new ChromeOptions())
    {
        // Register the chromedriver endpoint used to relay arbitrary DevTools commands.
        var repo = base.CommandExecutor.CommandInfoRepository;
        repo.TryAddCommand("send", new CommandInfo("POST", "/session/{sessionId}/chromium/send_command_and_get_result"));
    }
    public new Screenshot GetScreenshot()
    {
        // Page.captureScreenshot with fromSurface:true renders from the compositor
        // surface, so the window does not need to be activated.
        object response = Send("Page.captureScreenshot", new JObject {{"format", "png"}, {"fromSurface", true}});
        string base64 = (string)((JObject)response)["data"];
        return new Screenshot(base64);
    }
    protected object Send(string cmd, JObject args)
    {
        return this.Execute("send", new JObject {{"cmd", cmd}, {"params", args}}).Value;
    }
}
usage:
var driver = new ChromeDriverEx();
driver.Url = "https://stackoverflow.com";
driver.GetScreenshot().SaveAsFile("/tmp/screenshot.png");
driver.Quit();
When you invoke the Navigate().GoToUrl("url") method through your automation script, it is expected that the script will interact with some of the elements on the web page. For Selenium to interact with those elements, it needs focus. Hence opening the browser, bringing it to the front, and taking focus is the default behavior of Navigate().GoToUrl("url").
Default mode versus headless mode is controlled by the ChromeOptions/FirefoxOptions instance passed as an argument when initializing the WebDriver; Navigate().GoToUrl("url") has no influence on which mode the WebDriver instance runs in.
Now consider the method from the ITakesScreenshot interface, ITakesScreenshot.GetScreenshot, which is defined as:
Gets a Screenshot object representing the image of the page on the screen.
A WebDriver instance that implements ITakesScreenshot makes a best effort, depending on the browser, to return the following in order of preference:
Entire page
Current window
Visible portion of the current frame
The screenshot of the entire display containing the browser
There may be some instances when the browser loses focus. In that case you can use IJavaScriptExecutor to regain focus as follows:
((IJavaScriptExecutor)driver).ExecuteScript("window.focus();");
I was struggling with an issue where the generic GetScreenshot() was causing the browser to lose focus during parallel testing. Some elements were being removed from the DOM and my tests were failing. I've come up with a working solution for Edge and Chrome 100+ with Selenium 4.1:
public Screenshot GetScreenshot()
{
    // Register the chromedriver "send command" endpoint for this session.
    IHasCommandExecutor executor = webDriverInstance as IHasCommandExecutor;
    var sessionId = ((WebDriver)webDriverInstance).SessionId;
    var command = new HttpCommandInfo(HttpCommandInfo.PostCommand, $"/session/{sessionId}/chromium/send_command_and_get_result");
    executor.CommandExecutor.TryAddCommand("Send", command);
    // Capture via the DevTools API so the window is not activated.
    var response = Send(executor, "Page.captureScreenshot", new JObject { { "format", "png" }, { "fromSurface", true } });
    var base64 = ((Dictionary<string, object>)response.Value)["data"];
    return new Screenshot(base64.ToString());
}
private Response Send(IHasCommandExecutor executor, string cmd, JObject args)
{
    var json = new JObject { { "cmd", cmd }, { "params", args } };
    var command = new Command("Send", json.ToString());
    return executor.CommandExecutor.Execute(command);
}
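For completeness: this snippet assumes JObject is Newtonsoft.Json.Linq.JObject (not the Dictionary alias used in the earlier answer) and that webDriverInstance is a field holding the driver under test. A hypothetical call site:
// Hypothetical usage; the file name is illustrative.
Screenshot shot = GetScreenshot();
shot.SaveAsFile("focus-safe.png");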
I am using GeckoFx to perform a login to a specific website. This website edits the page with new information should the login fail (or require additional authentication, such as a ReCaptcha). It is vital that I have access to an event that fires when the page is updated. I have tried numerous approaches, mainly:
A continual check that the URI is still the same on each login attempt, with a subsequent check on the specific element in question (to see whether its display: none property was changed). This resulted in an infinite loop, as it seems GeckoFx updates the page in a non-blocking way.
Sleeping for ~5 seconds between login requests and using the aforementioned URI check. All this did (predictably; I was grasping at straws) was freeze the browser for 5 seconds and still fail to update the page.
Searching the GeckoFx codebase for a specific event, similar to the DocumentCompleted event, that fires when the page is updated (no such luck).
The most common approach I have read about (and the one that makes the most sense) is to use a MutationObserver. It seems that all of the answers across the internet involve injecting JavaScript to perform the requisite task. Seeing as my programming background has not touched web development whatsoever, I'm trying to stick to what I know.
Here is my approach so far; unfortunately, it is not much.
public class GeckoTestWebLogin
{
private readonly string _user;
private readonly string _pass;
public GeckoWebBrowser Gweb;
public Uri LoginUri { get; } = new Uri("https://website.com/login/");
public bool LoginCompleted { get; private set; } = false;
public bool Loaded { get; private set; } = false;
public GeckoTestWebLogin(string user, string pass)
{
_user = user;
_pass = pass;
Xpcom.EnableProfileMonitoring = false;
Xpcom.Initialize("Firefox");
//this code is for testing purposes, it will be removed upon project completion
CookieManager.RemoveAll();
Gweb = new GeckoWebBrowser();
Gweb.DocumentCompleted += DocLoaded;
//right about here is where I get lost, where can I set a callback method for the observer to report back to? Is this even how it works?
MutationObserver mutationObserver = new MutationObserver(Gweb.Window.DomWindow, (nsISupports)Gweb.Document.DomObject);
}
private void TestObservedEvent(string parms, object[] objs)
{
MessageBox.Show("The page was changed # " + DateTime.Now);
}
public void DocLoaded(object obj, GeckoDocumentCompletedEventArgs e)
{
Loaded = true;
if (Gweb.Url != LoginUri) return;
AttemptLogin();
}
private void AttemptLogin()
{
GeckoElementCollection elements = Gweb.Document.GetElementsByTagName("input");
foreach (GeckoHtmlElement element in elements)
{
switch (element.Id)
{
case "username":
element.SetAttribute("value", _user);
break;
case "password":
element.SetAttribute("value", _pass);
break;
case "importantchangedinfo":
GeckoHtmlElement authcodeModal =
(GeckoHtmlElement)
Gweb.Document.GetElementsByClassName("login_modal").First();
if (authcodeModal.Attributes["style"].NodeValue != "display: none")
{
InputForm form = new InputForm { InputDescription = "Captcha Required!" };
form.ShowDialog();
elements.FirstOrDefault(x => x.Id == "captchabox")?.SetAttribute("value", form.Input);
}
break;
}
}
elements.FirstOrDefault(x => x.Id == "Login")?.Click();
}
public void Login()
{
//this will cause the DocLoaded event to fire after completion
Gweb.Navigate(LoginUri.ToString());
}
}
As stated in the comments in the code above, I am completely lost at
MutationObserver mutationObserver = new MutationObserver(Gweb.Window.DomWindow, (nsISupports)Gweb.Document.DomObject);
I can't seem to find anything in GeckoFx's source for MutationObserver that would allow me to set a callback/event/what have you. Is my approach the correct way to go about things, or am I left with no option other than to inject JavaScript into the page?
Much appreciated, thank you in advance.
Here is my attempt at option 2 in Tom's answer:
(Added into GeckoTestWebLogin)
public void DocLoaded(object obj, GeckoDocumentCompletedEventArgs e)
{
Loaded = true;
if (Gweb.Url != LoginUri) return;
MutationEventListener mutationListener = new MutationEventListener();
mutationListener.OnDomMutation += TestObservedEvent;
nsIDOMEventTarget target = Xpcom.QueryInterface<nsIDOMEventTarget>(/*Lost here*/);
using (nsAString modified = new nsAString("DOMSubtreeModified"))
target.AddEventListener(modified, mutationListener, true, false, 0);
AttemptLogin();
}
MutationEventListener.cs:
public delegate void OnDomMutation(/*DomMutationArgs args*/);
public class MutationEventListener : nsIDOMEventListener
{
public event OnDomMutation OnDomMutation;
public void HandleEvent(nsIDOMEvent domEvent)
{
OnDomMutation?.Invoke(/*new DomMutationArgs(domEvent, this)*/);
}
}
I don't think GeckoFx's webidl compiler is currently advanced enough to generate the callback constructor.
Option 1. - Enhance MutationObserver source.
You could modify the MutationObserver source manually to add the necessary callback constructor, then recompile GeckoFx. (I haven't looked to see how difficult this is.)
Option 2. - Use old style Mutation events.
public class DOMSubtreeModifiedEventListener : nsIDOMEventListener
{
... // Implement HandleEvent
}
Then something like (maybe in DocumentCompleted event handler):
_domSubtreeModifiedEventListener = new DOMSubtreeModifiedEventListener(this);
var target = Xpcom.QueryInterface<nsIDOMEventTarget>(body);
using (nsAString subtreeModified = new nsAString("DOMSubtreeModified"))
target.AddEventListener(subtreeModified, _domSubtreeModifiedEventListener, true, false, 0);
Option 3. - Use Idle + Check.
Add a WinForms Application.Idle event handler and examine the document to know when it's ready.
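A minimal sketch of option 3, reusing the fields from the question's GeckoTestWebLogin; the readiness check is illustrative:
// Requires System.Windows.Forms and System.Linq; hook this up once, e.g. in the constructor.
Application.Idle += (s, e) =>
{
    if (!Loaded || Gweb.Document == null) return;
    // Illustrative readiness check: has the login modal appeared yet?
    var modal = Gweb.Document.GetElementsByClassName("login_modal").FirstOrDefault();
    if (modal != null)
        TestObservedEvent("idle-check", null);
};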
Option 4. - Inject a javascript callback.
(As you have already mentioned.) This example waits until after a resize is done.
Basically inject "<body onresize=fireResizedEventAfterDelay()>", then inject something like this:
string fireResizedEventAfterDelayScript = "<script>\n" +
"var resizeListner;" +
"var msDelay = 20;" +
"function fireResizedEventAfterDelay() {" +
"clearTimeout(resizeListner);" +
"resizeListner = setTimeout(function() { document.dispatchEvent (new MessageEvent('resized')); }, msDelay);" +
"}\n" +
"</script>\n";
Then in the C#:
browser.AddMessageEventListener("resized", (s) => runafterImDone())
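The same pattern could be adapted to the original mutation question: inject a JavaScript MutationObserver that dispatches a MessageEvent, then listen for it from C#. A hedged sketch; the event name and observer options are illustrative:
string mutationScript = "<script>\n" +
    "var observer = new MutationObserver(function(mutations) {" +
    "  document.dispatchEvent(new MessageEvent('dommutated'));" +
    "});" +
    "observer.observe(document.body, { childList: true, subtree: true, attributes: true });\n" +
    "</script>\n";
Then in the C#:
browser.AddMessageEventListener("dommutated", s => TestObservedEvent(s, null));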
I am using the Gecko web browser (GeckoFx) version 21.0.1 and .NET Framework 4.0 in my Windows application.
When I navigate to certain web pages I get a pop-up confirmation message:
This web page is being redirected to a new location. Would you like to
resend the form data you have typed to the new location?
How can I disable these messages?
So far I have tried the following settings, but they didn't help:
GeckoPreferences.User["security.warn_viewing_mixed"] = false;
GeckoPreferences.User["plugin.state.flash"] = 0;
GeckoPreferences.User["browser.cache.disk.enable"] = false;
GeckoPreferences.User["browser.cache.memory.enable"] = false;
You could try providing your own nsIPromptService2 / nsIPrompt implementation.
Run this early at program start-up (although after Xpcom.Initialize):
PromptFactory.PromptServiceCreator = () => new FilteredPromptService();
Where FilteredPromptService is defined something like this:
internal class FilteredPromptService : nsIPromptService2, nsIPrompt
{
private static PromptService _promptService = new PromptService();
public void Alert(nsIDOMWindow aParent, string aDialogTitle, string aText)
{
if(/*want default behaviour */)
{
_promptService.Alert(aDialogTitle, aText);
}
// Else do nothing
}
// TODO: implement other methods in similar fashion. (returning appropriate return values)
}
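The redirect confirmation from the question presumably arrives through one of the confirm-style members; assuming it follows the same signature pattern as Alert (an assumption, not verified against the GeckoFx sources), suppressing it might look like:
public bool Confirm(nsIDOMWindow aParent, string aDialogTitle, string aText)
{
    // Assumption: returning true accepts the dialog, so the form data
    // is resent without the user ever seeing the prompt.
    return true;
}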
You will also need to make sure that error pages are not enabled:
GeckoPreferences.User["browser.xul.error_pages.enabled"] = false;
I know there are other ways of checking, and other questions like this, but I'm still trying to debug my piece of code. It checks whether the user entered the URL without "http://".
//page is a global variable, a string in this case
//loadPage(string url) loads the requested page;
private void checkIfUrlRight(string s)
{
if (!s.StartsWith("http://"))
{
if (!s.Contains("www."))
{
s += "http://www.";
page = s;
}
else
{
s += "http://";
page = s;
}
urlRTextBox.Text = page;
loadPage(page);
}
else
page = s;
urlRTextBox.Text = page;
loadPage(page);
}
I get an error when loading a page; it says that the URL is wrong. Does my code actually make any sense, or should I just switch to complicated stuff like regex (I looked through the web; it looks harsh and I don't know where to start) or playing around with the C# Uri class? Any suggestions?
Thanks in advance.
There is a method on the Uri class that validates a URI; you can use it as follows:
void checkIfUrlRight(string s)
{
    if (Uri.IsWellFormedUriString(s, UriKind.RelativeOrAbsolute))
    {
        page = s; // assign the validated URL before loading it
        urlRTextBox.Text = page;
        loadPage(page);
    }
}
EDIT:
Note that not every URI is a URL (described here). But I believe for your case (where all the URIs involved are in fact URLs) it is the simplest way to validate.
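Worth noting: the original code also appends the scheme instead of prepending it (s += "http://www." turns "example.com" into "example.comhttp://www."), which is why the load fails. A minimal sketch combining the prepend with the validation above:
private void checkIfUrlRight(string s)
{
    // Prepend the scheme if it is missing; the question's code appended it.
    if (!s.StartsWith("http://") && !s.StartsWith("https://"))
        s = "http://" + s;
    if (Uri.IsWellFormedUriString(s, UriKind.Absolute))
    {
        page = s;
        urlRTextBox.Text = page;
        loadPage(page);
    }
}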
I need to write a custom "UrlRewriter" using an HttpModule. At the moment of rewriting, I need access to the Session, and I followed the advice from another SO thread:
Can I access session state from an HTTPModule?
Everything works except the RewritePath/Redirect part. I don't get any exceptions, but the browser takes forever to load. Is this really the best way to build a URL rewriter like this?
using System;
using System.Web;
using System.Web.SessionState;
using System.Diagnostics;
namespace MyCompany.Campaigns
{
public class CampaignRewriteModule : IHttpModule
{
public void Init(HttpApplication application)
{
application.PostAcquireRequestState += new EventHandler(Application_PostAcquireRequestState);
application.PostMapRequestHandler += new EventHandler(Application_PostMapRequestHandler);
}
void Application_PostMapRequestHandler(object source, EventArgs e)
{
HttpApplication app = (HttpApplication)source;
if (app.Context.Handler is IReadOnlySessionState || app.Context.Handler is IRequiresSessionState)
{
return;
}
app.Context.Handler = new MyHttpHandler(app.Context.Handler);
}
void Application_PostAcquireRequestState(object source, EventArgs e)
{
HttpApplication app = (HttpApplication)source;
MyHttpHandler resourceHttpHandler = HttpContext.Current.Handler as MyHttpHandler;
if (resourceHttpHandler != null)
{
HttpContext.Current.Handler = resourceHttpHandler.OriginalHandler;
}
Debug.Assert(app.Session != null);
string path = HttpUtils.Path();
if (!CampaignCodeMethods.IsValidCampaignCode(path)) return;
string domain = HttpUtils.Domain();
CampaignCode code = CampaignManager.RegisterCode(path, domain.Equals(Config.Instance.Domain.ToLower()) ? null : domain);
if (code != null)
{
//app.Context.RewritePath(code.CampaignCodePath.Path, false);
app.Context.Response.Redirect(code.CampaignCodePath.Path, true);
}
}
public void Dispose() { }
public class MyHttpHandler : IHttpHandler, IRequiresSessionState
{
internal readonly IHttpHandler OriginalHandler;
public MyHttpHandler(IHttpHandler originalHandler)
{
OriginalHandler = originalHandler;
}
public void ProcessRequest(HttpContext context)
{
throw new InvalidOperationException("MyHttpHandler cannot process requests.");
}
public bool IsReusable
{
get { return false; }
}
}
}
}
I think I know what it is. Your module executes on ALL requests and assigns a handler that throws an error unless there is a valid campaign code (in which case a rewrite/redirect occurs).
But because this happens not just for your campaign-code URLs, an error is thrown, which redirects you to your error page, which is caught by the module, which assigns the handler, which throws an error, which redirects... I think you get where I'm going ;)
Otherwise I'd try a few things:
Set up Fiddler and check for an infinite redirect loop
Put a breakpoint on app.Context.Response.Redirect - make sure you're not in an infinite loop
Put a breakpoint on MyHttpHandler.ProcessRequest - make sure it's not being called with the exception swallowed
I wrote a simple URL rewriter module that did something similar. The URL rewriting is done in BeginRequest by comparing the requested URL to a list of known URLs. If we find a match, we use HttpContext.RewritePath to change the requested URL.
This appears to work well with no serious side effects.
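A minimal sketch of that module, with a hypothetical known-URL table standing in for whatever lookup you use:
using System.Collections.Generic;
using System.Web;
public class SimpleRewriteModule : IHttpModule
{
    // Hypothetical mapping of public URLs to the pages that serve them.
    private static readonly Dictionary<string, string> KnownUrls =
        new Dictionary<string, string> { { "/campaign/spring", "~/Campaign.aspx?code=spring" } };
    public void Init(HttpApplication application)
    {
        application.BeginRequest += (source, e) =>
        {
            var app = (HttpApplication)source;
            string target;
            if (KnownUrls.TryGetValue(app.Request.Path.ToLowerInvariant(), out target))
                app.Context.RewritePath(target); // the browser keeps showing the original URL
        };
    }
    public void Dispose() { }
}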
I notice that you use Response.Redirect instead of Context.RewritePath. Using Redirect will cause the user's browser to request a new page at the new URL, so the user will see the new URL in their browser. Is this really what you want? If it is, you could use an alternative approach: a custom 404 error handler that redirects the user to the appropriate page.
Set up IIS to redirect all 404 errors to a page you control, say Custom404.aspx. In this page you can check the requested URL to see whether it should be rewritten. If it should, set Response.Status to "301 Moved Permanently" and write a header named "Location" with the new URL as its value. If it should not, just output the standard 404 page-not-found content.
This last approach works well, but as with your Response.Redirect approach, the user will see the new URL in their browser. Using Context.RewritePath allows you to serve a different page than the one requested.
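A sketch of that 404-based approach, assuming a Custom404.aspx code-behind; LookupNewUrl is a hypothetical helper for your own URL mapping:
// In the Custom404.aspx code-behind.
protected void Page_Load(object sender, EventArgs e)
{
    // IIS typically passes the original URL in the query string, e.g. "404;http://host/original/path".
    string requestedUrl = Request.QueryString.ToString();
    string newUrl = LookupNewUrl(requestedUrl); // hypothetical mapping lookup
    if (newUrl != null)
    {
        Response.Status = "301 Moved Permanently";
        Response.AddHeader("Location", newUrl);
        Response.End();
    }
    // Otherwise fall through and render the standard 404 content.
}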
Is your URL rewriter handling requests that aren't for an actual page? If so, I don't think you can access Session... the last URL rewriter I wrote handled 404 errors, and I remember digging around and finding (somewhere, can't remember where) that you don't get access to Session if the request is not for an actual .aspx page.
I'm thinking the problem may be inside this block:
if (code != null)
{
//app.Context.RewritePath(code.CampaignCodePath.Path, false);
app.Context.Response.Redirect(code.CampaignCodePath.Path, true);
}
Try putting a breakpoint in the if statement and see if it continually gets hit.
I think there should be a call to 'return' after you reset the handler to the original one, or else you will continually rewrite the path.
Thinking about it, that's probably why the page is loading forever! :)