How to set geolocation in headless Chrome mode? - C#

I need to run UI autotests in headless Chrome. But the standard settings
options.AddUserProfilePreference("profile.default_content_setting_values.geolocation", 1);
options.AddUserProfilePreference("profile.managed_default_content_settings.geolocation", 1);
do not work in headless mode.
I read that geolocation can be set manually by emulating DevTools commands.
My C# code:
var devTools = Driver as IDevTools;
var session = devTools!.GetDevToolsSession();
var typeList = new[] { PermissionType.Geolocation };
var commandPermission = new GrantPermissionsCommandSettings();
commandPermission.Permissions = typeList;
commandPermission.Origin = "https://www.gps-coordinates.net/my-location";
session.SendCommand(commandPermission);
var command = new SetGeolocationOverrideCommandSettings();
command.Latitude = 35.689487;
command.Longitude = 139.691706;
command.Accuracy = 100;
session.SendCommand(command);
But unfortunately it doesn't work.
Could you suggest what the problem might be?
**UPDATED**
In the end, the code above worked, but I still could not click the button because of an overlay asking for permission to determine geolocation.
Eventually, with the help of a JS script, I was able to set the geolocation:
IJavaScriptExecutor js = (IJavaScriptExecutor)Driver;
js.ExecuteScript("navigator.geolocation.getCurrentPosition = (cb) => {cb({ coords: { latitude: 35, longitude: 139 } })}");
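For completeness, a slightly fuller variant of that workaround, as a sketch only: it stubs both getCurrentPosition and watchPosition, and it has to be re-run after every navigation because the override only lives in the current page context.
// Sketch: stub the Geolocation API in the current page context (re-run after each navigation).
IJavaScriptExecutor js = (IJavaScriptExecutor)Driver;
js.ExecuteScript(
    "const fakePosition = { coords: { latitude: 35.689487, longitude: 139.691706, accuracy: 100 } };" +
    "navigator.geolocation.getCurrentPosition = (success) => success(fakePosition);" +
    "navigator.geolocation.watchPosition = (success) => { success(fakePosition); return 0; };");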

That should be possible using the Chrome DevTools Protocol method Emulation.setGeolocationOverride (see https://chromedevtools.github.io/devtools-protocol/tot/Emulation/#method-setGeolocationOverride).
That's actually what you're doing. Maybe the accuracy value is too big; try using 1.
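For reference, a minimal sketch of that approach with the commands awaited and a small accuracy value. It reuses the same command classes shown in the question; the exact versioned DevTools namespace they come from is an assumption here, and the snippet needs an async context.
// Sketch: grant the geolocation permission and override the coordinates via CDP,
// using the same command classes as in the question.
var devTools = Driver as IDevTools;
var session = devTools!.GetDevToolsSession();

var grant = new GrantPermissionsCommandSettings
{
    Permissions = new[] { PermissionType.Geolocation },
    Origin = "https://www.gps-coordinates.net/my-location"
};
await session.SendCommand(grant);

var geo = new SetGeolocationOverrideCommandSettings
{
    Latitude = 35.689487,
    Longitude = 139.691706,
    Accuracy = 1 // keep this small, as suggested above
};
await session.SendCommand(geo);

// Navigate after the override so the page sees the faked position.
Driver.Navigate().GoToUrl("https://www.gps-coordinates.net/my-location");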

Related

Frame # not found when using Puppeteer

I'm having issues with Puppeteer: I am trying to type into a textbox that is inside an iframe.
I have created a simple repo with a code snippet; the page contains an iframe with a tweet from Twitter.
await new BrowserFetcher().DownloadAsync(BrowserFetcher.DefaultChromiumRevision);
var launchOptions = new LaunchOptions
{
Headless = false,
DefaultViewport = null
};
launchOptions.Args = new[] { "--disable-web-security", "--disable-features=IsolateOrigins,site-per-process" };
ChromeDriver = await Puppeteer.LaunchAsync(launchOptions);
page = await ChromeDriver.NewPageAsync();
await page.GoToAsync(Url, new NavigationOptions { WaitUntil = new WaitUntilNavigation[] { WaitUntilNavigation.Networkidle0 } });
var selectorIFrame = "#twitter_iframe";
var frameElement1 = await page.WaitForSelectorAsync(selectorIFrame);
var frame1 = await frameElement1.ContentFrameAsync();
var frameContent1 = await frame1.GetContentAsync();
The line var frame1 = await frameElement1.ContentFrameAsync(); fails with "Frame # not found".
Versions:
PuppeteerSharp 7.0
.NET 6
Git example
Try disabling some of the security features that can be disabled when launching Puppeteer.
Check chrome://flags/ in the Puppeteer-controlled Chrome in case there's something blocking iframe access; maybe it's insecure content, or maybe you have to be explicit about the site-isolation trials.
My two cents: the args below should allow it to access the iframe from a non-secure context:
Args = new[]
{
"--disable-web-security",
"--disable-features=IsolateOrigins,site-per-process,BlockInsecurePrivateNetworkRequests",
"--disable-site-isolation-trials"
}
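For illustration, a hedged sketch of how those args could be combined with the code from the question; the selector and the Url variable are the ones from the question, everything else is standard PuppeteerSharp.
// Sketch: launch with relaxed site isolation, then read the iframe's content.
var launchOptions = new LaunchOptions
{
    Headless = false,
    DefaultViewport = null,
    Args = new[]
    {
        "--disable-web-security",
        "--disable-features=IsolateOrigins,site-per-process,BlockInsecurePrivateNetworkRequests",
        "--disable-site-isolation-trials"
    }
};
using var browser = await Puppeteer.LaunchAsync(launchOptions);
var page = await browser.NewPageAsync();
await page.GoToAsync(Url, new NavigationOptions { WaitUntil = new[] { WaitUntilNavigation.Networkidle0 } });

var frameElement = await page.WaitForSelectorAsync("#twitter_iframe");
var frame = await frameElement.ContentFrameAsync(); // should no longer throw "Frame # not found"
var frameContent = await frame.GetContentAsync();   // raw HTML of the iframe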

Is it possible to click button with PuppeteerSharp in headless mode?

I'm developing a bot. It works normally when headless mode is set to false. Whenever I start it with headless mode set to true, it throws timeout errors because it can't find my selectors.
I thought it might be because of a different resolution in the two modes, so I set a static default viewport. It fixed nothing.
Is it even possible to click with Puppeteer in headless mode? I would like to achieve that so I don't have multiple Chrome windows open.
Creating browser
_browser = await Puppeteer.LaunchAsync(new LaunchOptions
{
Headless = true,
ExecutablePath = @"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe",
Args = new[] { "--disable-web-security", "--disable-infobars" },
DefaultViewport = new ViewPortOptions { Height = 1080, Width = 1920},
}) ;
var pagesAsync = await _browser.PagesAsync();
_page = pagesAsync.FirstOrDefault();
const string logInButtom = "#__layout > div > nav > div.uinfo-wrapper.flex > div.login-btn-wrap > button";
await _page.WaitForSelectorAsync(logInButtom);
await _page.ClickAsync(logInButtom);
System.Threading.Thread.Sleep(1500);
Debug.WriteLine("Succesfull login show");
Here is the piece of code that works with headless = false but doesn't work with headless = true:
await _page.WaitForSelectorAsync(logInButtom); throws a timeout.
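One hedged way to see what the headless browser is actually rendering when the selector times out is to dump a screenshot and the page HTML right before the wait. This is only a diagnostic sketch, not a fix; the url variable stands in for whatever page the bot opens.
// Sketch: capture what headless Chrome sees before waiting for the selector.
await _page.GoToAsync(url); // 'url' is a placeholder for the page the bot navigates to
await _page.ScreenshotAsync("before-wait.png"); // inspect the rendered page
var html = await _page.GetContentAsync();       // or search the HTML for the selector
Debug.WriteLine(html.Contains("login-btn-wrap")
    ? "Login button markup is present"
    : "Login button markup is missing in headless mode");
await _page.WaitForSelectorAsync(logInButtom);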

Your connection is not secure - using Selenium.WebDriver v.3.6.0 + Firefox v.56

I'm writing tests with Selenium + C# and I'm facing an important issue: I haven't found a solution for testing my site over a secure connection (HTTPS). All solutions I found on Stack Overflow are out of date or don't work.
I tried all of the solutions from the question below:
Selenium Why setting acceptuntrustedcertificates to true for firefox driver doesn't work?
But they did not help me solve the problem.
Nor did using Firefox Nightly solve it.
Still, when Selenium loads the Firefox browser, I see the page: "Your connection is not secure".
Configuration:
Firefox v56.0
Selenium.Firefox.WebDriver v0.19.0
Selenium.WebDriver v3.6.0
my code is:
FirefoxOptions options = new FirefoxOptions();
FirefoxProfile profile = new FirefoxProfile();
profile.AcceptUntrustedCertificates = true;
profile.AssumeUntrustedCertificateIssuer = false;
options.Profile = profile;
driver = new FirefoxDriver(FirefoxDriverService.CreateDefaultService() , options , TimeSpan.FromSeconds(5));
Drivers.Add(Browsers.Firefox.ToString() , driver);
Thanks for your help!
Updates to my question here:
Note 1: To anyone who has marked my question as a duplicate of this question:
Firefox selenium webdriver gives “Insecure Connection”
I thought it was the same issue, but I need a solution for C#, so I tried to adapt your Java code to my code above.
First, I changed the statement below to true:
profile.AssumeUntrustedCertificateIssuer = true;
Second, I created a new Firefox profile ("AutomationTestsProfile") and tried to use it:
Try 1:
FirefoxProfile profile = new FirefoxProfileManager().GetProfile("AutomationTestsProfile");
Try 2:
FirefoxProfile profile = new FirefoxProfile("AutomationTestsProfile");
I ran both options, but the issue still exists.
Note 2: I attached a screenshot of my problem; it appears when the driver tries to enter text into the username field on the login page.
I noticed that when I open my site in Firefox, it displays a lock icon with a red strikethrough in the address bar,
but the message "This connection is not secure. Logins entered here could be compromised. Learn More" (as described in the duplicate question) does not appear near the username textbox.
So maybe there is a different problem?
You are setting the properties on the profile. FirefoxOptions has an AcceptInsecureCertificates property; set that to true.
Forget the profile, this is what you want:
var op = new FirefoxOptions
{
AcceptInsecureCertificates = true
};
Instance = new FirefoxDriver(op);
For me, the profile setting AcceptUntrustedCertificates was not enough; I also had to set the security.cert_pinning.enforcement_level preference. My startup looks like this:
// no idea why FirefoxWebDriver needs this, but it will throw without
// https://stackoverflow.com/questions/56802715/firefoxwebdriver-no-data-is-available-for-encoding-437
CodePagesEncodingProvider.Instance.GetEncoding(437);
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
var service = FirefoxDriverService.CreateDefaultService(Environment.CurrentDirectory);
service.FirefoxBinaryPath = Config.GetConfigurationString("FirefoxBinaryPath"); // path in appsettings
var options = new FirefoxOptions();
options.SetPreference("security.cert_pinning.enforcement_level", 0);
options.SetPreference("security.enterprise_roots.enabled", true);
var profile = new FirefoxProfile()
{
AcceptUntrustedCertificates = true,
AssumeUntrustedCertificateIssuer = false,
};
options.Profile = profile;
var driver = new FirefoxDriver(service, options);
It works for me with the following settings (same as above):
My env:
win 7
firefox 61.0.2 (64-bit)
Selenium C# webdriver : 3.14.0
geckodriver-v0.21.0-win32.zip
==============================
FirefoxOptions options = new FirefoxOptions();
options.BrowserExecutableLocation = @"C:\Program Files\Mozilla Firefox\firefox.exe";
options.AcceptInsecureCertificates = true;
new FirefoxDriver(RelativePath,options);

Selenium can't handle multiple ChromiumWebBrowser instances in C#

I have two instances of the ChromiumWebBrowser in my WinForms project (Visual Studio 2012). My goal is to have the second browser instance "copy" the behavior of the user input in the first browser instance. I can successfully retrieve the input from the first browser, and I managed to hook up Selenium in the project as well.
However, I'm having one issue. Whenever Selenium sends its commands, the first browser is the one that responds to them. For the life of me, I can't figure out how to make the second browser respond. Whenever I completely remove the first browser, the second one starts responding correctly, but adding the first browser again makes only the first browser follow the Selenium commands. I even tried switching the moments at which the browsers are added to the form, but to no avail: whenever both are available, the wrong one is responsive.
Relevant code:
public BrowserManager(Controller controller, string startingUrl)
{
_controller = controller;
var settings = new CefSettings { RemoteDebuggingPort = 9515 };
Cef.Initialize(settings);
// Input browser
inputBrowser = new ChromiumWebBrowser(startingUrl);
var obj = new XPathHelper(this);
inputBrowser.RegisterJsObject("bound", obj); //Standard object registration
inputBrowser.FrameLoadEnd += obj.OnFrameLoadEnd;
// Output browser
var browserSettings = new BrowserSettings();
var requestContextSettings = new RequestContextSettings { CachePath = "" };
var requestContext = new RequestContext(requestContextSettings);
outputBrowser = new ChromiumWebBrowser(startingUrl);
outputBrowser.RequestContext = requestContext;
outputBrowser.AddressChanged += InitializeOutputBrowser;
outputBrowser.Enabled = false;
outputBrowser.Name = "outputBrowser";
}
The Selenium part:
public class SeleniumHelper
{
public SeleniumHelper()
{
DoWorkAsync();
}
private Task DoWorkAsync()
{
return Task.Run(() =>
{
string chromeDriverDir = @"ActionRecorder\bin\x64\Debug\Drivers";
var chromeDriverService = ChromeDriverService.CreateDefaultService(chromeDriverDir);
chromeDriverService.HideCommandPromptWindow = true;
ChromeOptions options = new ChromeOptions();
options.BinaryLocation = @"ActionRecorder\bin\x64\Debug\ActionRecorder.exe";
options.DebuggerAddress = "127.0.0.1:9515";
options.AddArguments("--enable-logging");
using (IWebDriver driver = new OpenQA.Selenium.Chrome.ChromeDriver(chromeDriverService, options))
{
driver.Navigate().GoToUrl("http://www.google.com");
var query = driver.FindElement(By.Name("q"));
query.SendKeys("A google search test");
query.Submit();
}
});
}
}
And finally, a screenshot for some visualization:
Some help with the issue would be very much appreciated. If I missed some crucial info, feel free to ask for it. Thanks in advance!
Greetz,
Tybs
The behavior is correct. You have one debug address, and you can only have one debug address for CEF, which means that when you use Selenium it only sees one browser.
By default Selenium sends commands to the currently active tab or window. In your case you have multiple Chromium views embedded, but they are technically Chrome tabs/windows that you have placed on the same form.
So, with luck, the code below should move you to the window you are interested in:
driver.SwitchTo().Window(driver.WindowHandles.Last());
See if it works. If it doesn't, your only other workaround would be to change the order in which the ChromiumWebBrowser instances are added, which should reverse the window the commands go to.
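For illustration, a hedged sketch of cycling through the handles to find the right embedded browser; it assumes each ChromiumWebBrowser shows up as a separate window handle that can be told apart by URL or title.
// Sketch: walk the window handles and switch to the one that matches the output browser.
foreach (var handle in driver.WindowHandles)
{
    driver.SwitchTo().Window(handle);
    // Placeholder check: use whatever URL or title distinguishes your output browser.
    if (driver.Url.Contains("expected-output-page"))
    {
        break; // subsequent commands now go to this browser
    }
}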
Below are some important threads that you should read from top to bottom; they are very relevant to your issue:
https://code.google.com/archive/p/chromiumembedded/issues/421
https://github.com/cefsharp/CefSharp/issues/1076

PhantomJS click link not working

I am trying to get the URL of the second page of a yellowpages result with the following code:
var driverService = PhantomJSDriverService.CreateDefaultService();
var driver = new PhantomJSDriver(driverService);
driver.Navigate().GoToUrl(new Uri("http://www.yellowpages.com/los-angeles-ca/pizza?g=Los+Angeles%2C+CA"));
string url = driver.Url;
var next = driver.FindElementByCssSelector(".next");
next.Click();
string newUrl = driver.Url;
The "next" link is found and clicked but I do not get the new URL after calling next.Click().
Other pages work fine. I am only having problems on yellowpages right now.
Any ideas?
Try this for clicking on the web element instead of using Click():
IJavaScriptExecutor js = (IJavaScriptExecutor)driver;
js.ExecuteScript("arguments[0].click();", next);
Make sure you have console output turned on so you can see the exact error, i.e. don't hide the command prompt window:
service.HideCommandPromptWindow = false;
I had a similar problem, and when I turned on console output I noticed the following error: "can't find variable: __doPostBack".
In my case, that was because the site rejected PhantomJS's default user agent, so I had to change it (based on this answer).
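For reference, a hedged sketch of overriding the user agent in C# with PhantomJSOptions; the user-agent string shown is just an example.
// Sketch: give PhantomJS a desktop-Chrome-like user agent before creating the driver.
var service = PhantomJSDriverService.CreateDefaultService();
service.HideCommandPromptWindow = false; // keep console output visible while debugging

var options = new PhantomJSOptions();
options.AddAdditionalCapability("phantomjs.page.settings.userAgent",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0 Safari/537.36");

var driver = new PhantomJSDriver(service, options);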
