Selenium WebDriver + C#

I tried to ask this question before and was asked to be more specific, so here I'll be as specific as possible.
1) Check that at least 4 news items are displayed in the news section.
2) Check that the text “© 2019 Belarusian Railway” is displayed at the bottom of the page.
3) Check that 5 buttons are present in the top part of the page: “Press Center”, “Timetable”, “Passenger Services”, “Freight”, “Corporate”. Here I tried to use the Location property with X and Y coordinates, but when I asked my mentor he said I won't be able to determine it with this approach, so I have no idea how to check where an element is located.
4) Type 20 random symbols into the “Поиск” (“Search”) input; they should be different for each execution. I know about the SendKeys method, but how do I make my input random for each run?
5) Check that the address in the browser changed to https://www.rw.by/search/?s=Y&q=
6) Check that the text “К сожалению, на ваш поисковый запрос ничего не найдено.” (“Unfortunately, nothing was found for your search query.”) is displayed on the page.
7) Clear the entered random value from the corresponding input and enter "Санкт-Петербург" ("Saint Petersburg") instead.
8) Click the find button. I know how to do this step.
9) Check that 15 links to the articles are displayed on the screen. No idea how to do this.
10) Write the text from those links to the console.
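
For the random-input task, one option is to build the 20-character string in code and pass it to SendKeys; a minimal sketch, where the character set is an arbitrary choice and _inputField is the locator from my test code:

```csharp
// Sketch: a 20-character random query, different on each run.
private static readonly Random Rng = new Random();

private static string RandomQuery(int length)
{
    const string alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
    var chars = new char[length];
    for (int i = 0; i < length; i++)
        chars[i] = alphabet[Rng.Next(alphabet.Length)];
    return new string(chars);
}

// Usage:
// driver.FindElement(_inputField).SendKeys(RandomQuery(20));
```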
Here is the code for the tasks I managed to complete:
[Test]
public void Test1()
{
    var inputField = driver.FindElement(_inputField);
    inputField.SendKeys("белорусская железная дорога");
    var searchBtn = driver.FindElement(_searchButton);
    Thread.Sleep(1000);
    searchBtn.Click();
    var targetUrl = driver.FindElement(_targetLink);
    targetUrl.Click();
    Assert.Pass();
}
}
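
For the link-count and console-output tasks, FindElements plus a loop should be enough; a sketch, where the CSS selector for the result links is an assumption that needs checking against the real page:

```csharp
// Sketch: assert the result count and print each link's text.
// ".search-result a" is a guessed selector - inspect the page for
// the real one.
var links = driver.FindElements(By.CssSelector(".search-result a"));
Assert.AreEqual(15, links.Count);
foreach (var link in links)
    Console.WriteLine(link.Text);

// The address-bar check can read the driver's Url property:
StringAssert.StartsWith("https://www.rw.by/search/?s=Y&q=", driver.Url);
```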

Related

Cannot access same source HTML as a browser

I'm coming back to a bot that scraped data from a site once a day for my personal use.
However, they changed the code during COVID, and now it seems they load a lot of the content with Ajax/JavaScript.
I thought that if I made a WebRequest and obtained the response HTML from a URL, it would match the content I see in a browser (Firefox/Chrome) when I right-click and "view source". I assumed the actual DOM and generated source would come later, as onload events fired, scripts lazily loaded, and so on.
However, the source HTML my bot obtains is NOT the same as the HTML I see when viewing the source code, so the regular expressions that find certain links are not available to me.
Why am I seeing a difference between "view source" and a download of the HTML?
I can only think that when the page loads, scripts run that load other content into it, and that "view source" is actually showing me a partially generated source rather than the original source code. Is there therefore a way to call the page with my bot and wait X seconds before obtaining the response, to get this "onload" generated HTML?
Or, even better, a way for MY bot (not someone else's) to view the generated source.
This bot runs as a web service. I could find another site to scrape, but it's painful when all my regular expressions work on the source I see, except it's NOT the source my bot obtains.
I'm a bit confused as to why my browser shows me more content in "view source" (not generated source) than my bot gets when making a valid request.
Any help would be much appreciated; this is almost an 8-year project that I have been working on on and off, and this change has broken one of the core parts of the system.
In response to the OP's comment, here is the Java code for how to click at different parts of the screen:
You could use Java's Robot class. I just learned about it a few days ago:
// Imports
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;

// Code
void click(int x, int y, int btn) throws AWTException {
    Robot robot = new Robot();
    robot.mouseMove(x, y);
    robot.mousePress(btn);
    robot.mouseRelease(btn);
}
You would then run the click function with the x and y position to click, as well as the button mask (InputEvent.BUTTON1_DOWN_MASK, InputEvent.BUTTON2_DOWN_MASK, etc.; note that Robot.mousePress takes InputEvent button masks, not the MouseEvent.BUTTON1 constants).
After stringing together the right positions (these will vary depending on the screen), you could do just about anything.
To use shortcuts, just use the keyPress and keyRelease functions. Here is a good way to do this:
void key(int keyCode, boolean ctrl, boolean alt, boolean shift) throws AWTException {
    Robot robot = new Robot(); // import java.awt.event.KeyEvent for the constants
    if (ctrl)
        robot.keyPress(KeyEvent.VK_CONTROL);
    if (alt)
        robot.keyPress(KeyEvent.VK_ALT);
    if (shift)
        robot.keyPress(KeyEvent.VK_SHIFT);
    robot.keyPress(keyCode);
    robot.keyRelease(keyCode);
    if (ctrl)
        robot.keyRelease(KeyEvent.VK_CONTROL);
    if (alt)
        robot.keyRelease(KeyEvent.VK_ALT);
    if (shift)
        robot.keyRelease(KeyEvent.VK_SHIFT);
}
Thus, something like Ctrl+Shift+I to open the inspect menu would look like this:
key(KeyEvent.VK_I, true, false, true);
Here are the steps to copy a website's code (from the inspector) with Google Chrome:
Ctrl + Shift + I
Right click the HTML tag
Select "Edit as HTML"
Ctrl + A
Ctrl + C
Then, you can use the technique from this StackOverflow answer to get the content from the clipboard:
Clipboard c = Toolkit.getDefaultToolkit().getSystemClipboard();
String text = (String) c.getData(DataFlavor.stringFlavor);
Use something like FileOutputStream to put the info into a file:
FileOutputStream output = new FileOutputStream(new File( PATH HERE ));
output.write(text.getBytes());
output.close();
I hope this helps!
I seem to have fixed it by just turning on the ability to store cookies in my custom HTTP (bot/scraper) class that was being called by the class trying to obtain the data. Probably the site has a defence system that spots visitors requesting pages but not the JS/CSS, with a different session ID on each request.
However, I would like to see some other examples, because if it is just cookies, then they could use JavaScript to test for JavaScript, e.g. an AJAX call to log whether JS is actually on, or some DOM manipulation to determine whether you are really human, which would break it again.
Every site uses different methods to stop scrapers, email harvesters, link harvesters and so on, including working out the standard time between requests for verifiable humans versus bots and then using those values to help detect spoofed user agents. I wrote a whole system to stop bots at my last place of work, and it's a layered approach. I'm just glad that enabling cookies solved it on this site, but it could easily be beefed up with other tricks to tell bots from humans.
I do know some Java, enough to work out what is going on anyway. My BOT is in C#.
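
An alternative to cookie tweaking is to let a real browser engine render the page and then read the generated source; a minimal C# Selenium sketch, where the URL and the wait condition are placeholders (assumes the Selenium.WebDriver, Selenium.Support, and ChromeDriver NuGet packages):

```csharp
// Sketch: fetch the *rendered* HTML with a browser engine instead of
// a raw WebRequest. The URL and the element waited on are placeholders.
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

var options = new ChromeOptions();
options.AddArgument("--headless");
using var driver = new ChromeDriver(options);

driver.Navigate().GoToUrl("https://example.com");
// Wait until the JS-injected content is present rather than sleeping X seconds.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
wait.Until(d => d.FindElements(By.CssSelector("a")).Count > 0);

string generatedHtml = driver.PageSource; // the post-onload DOM, serialized
```

The existing regular expressions could then run against generatedHtml instead of the raw response.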

Can not take a full page screenshot in Applitools C#

I'm trying to take a full-page screenshot of my page using this code:
public void OpenEyesForVisualTesting(string testName) {
    this.seleniumDriver.Driver = this.eyes.Open(this.seleniumDriver.Driver, "Zinc", testName);
}

public void CheckScreenForVisualTesting() {
    this.eyes.Check("Zinc", Applitools.Selenium.Target.Window().Fully());
}

public void CloseEyes() {
    this.eyes.Close();
}
but instead I just get half a page in the screenshot. I tried to contact Applitools, but they just told me to replace eyes.CheckWindow() with eyes.Check("tag", Target.Window().Fully()), which still didn't work.
If anyone can help me, that would be great.
I work for Applitools; sorry for your troubles. Maybe you did not see our replies, or they went to your spam folder. You need to set ForceFullPageScreenshot = true and StitchMode = StitchModes.CSS in order to capture a full-page screenshot.
The code example below is everything you'd need to do in order to capture a full-page image. Also, please make sure your .NET Eyes.Sdk version is >= 2.6.0 and Eyes.Selenium >= 2.5.0.
If you have any further questions or are still encountering issues with this, please feel free to email me directly. Thanks.
var eyes = new Eyes();
eyes.ApiKey = "your-api-key";
eyes.ForceFullPageScreenshot = true;
eyes.StitchMode = StitchModes.CSS;
// The last parameter is your viewport size IF testing in a browser.
// Do not set it when testing on a mobile device.
eyes.Open(driver, "Zinc", testName, new Size(800, 600));
eyes.CheckWindow("Zinc");
// or the new Fluent API method:
eyes.Check("Zinc", Applitools.Selenium.Target.Window().Fully());
eyes.Close();
Use the Extent Reports library; with it you can take screenshots as well as produce an interactive report of passed and failed cases.
Here is a link showing how it works:
http://www.softwaretestingmaterial.com/generate-extent-reports/
If you want to take a screenshot of the complete page, or of a particular element, with Applitools, you can use these lines (Java bindings):

Full-page screenshot:

eyes.setHideScrollbars(false);
eyes.setStitchMode(StitchMode.CSS);
eyes.setForceFullPageScreenshot(true);
eyes.checkWindow();

Screenshot of a particular element:

WebElement button = driver.findElement(By.xpath("//button[@id='login-btn']"));
eyes.setHideScrollbars(false);
eyes.setStitchMode(StitchMode.CSS);
eyes.open(driver, projectName, testName);
eyes.setMatchLevel(MatchLevel.EXACT);
eyes.checkElement(button);
(I am from Applitools.)
When you set ForceFullPageScreenshot to true, Applitools will try to scroll the HTML element of the page. If HTML is not the scrollable element, you will need to set the scroll root yourself:

eyes.Check(Target.Window().ScrollRootElement(selector));

In Firefox, you can see a scroll tag near elements that are scrollable.

Take screenshot of full page with selenium

I have code that takes a screenshot after every step; however, it only captures the current viewport. I would like it to capture the entire page.
I have looked into things like aShot, but I don't know how to add it to my project.
Current code:
[AfterStep]
public void AfterStep()
{
    Screenshot ss = ((ITakesScreenshot)webDriver).GetScreenshot();
    string stepTitle = ScenarioContext.Current.StepContext.StepInfo.Text;
    string fileTitle = DateTime.Now.ToString("yyyyMMdd_HHmmss") + "_" + stepTitle;
    string drive = path + DateTime.Now.ToString("yyyyMMdd_HHmm") + "_" + testTitle;
    string screenshotFileName = drive + "\\" + fileTitle + ".png";
    ss.SaveAsFile(screenshotFileName, ScreenshotImageFormat.Png);
}
Edit:
My problem is not with the act of taking a screenshot itself. The issue is that my page extends beyond the height of the browser, and therefore when I take a screenshot I get the viewable area, not the entire page.
Have you tried the following at the start of your test?

webDriver.Manage().Window.Maximize();

e.g. you could include it here:

[Binding]
public class Setup
{
    public static IWebDriver Driver = new ChromeDriver();

    [BeforeTestRun]
    public static void SetUpBrowser()
    {
        Driver.Manage().Window.Maximize();
    }
}

Note: it may have slightly different syntax to 'BeforeTestRun' in Gherkin.
If your browser is on top and you can see it on your screen, try using C#'s screen-capture feature (which takes a screenshot of the whole screen), like pressing the PrtScn button.
Implementations within the WebDriver itself (like the code you showed) should, IMO, take a screenshot only of what the WebDriver handles (the browser window). If you want the full screen, just use one of the implementations in the link below:
Capture a Screen Shot using C#
Edit: if you want to take a screenshot of the whole page even where it is not visible (and you have to scroll down to see it), it seems that is not possible, as shown here:
how to get screenshot of full web page using selenium in java?
You may want to try this feature of Selenium (shown here in the Python binding):

set_window_size(int(width), int(height))

I just set the window size to 4800×2560 and captured the whole page at once; previously I had to scroll down twice on a 27-inch screen.
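
A sketch of the same window-resizing trick in the C# bindings; the 4800×2560 size is taken from the answer above, and whether a single capture covers the whole page depends on the driver and page height:

```csharp
// Sketch: enlarge the browser window beyond the page height so one
// viewport screenshot covers the entire page.
using System.Drawing;
using OpenQA.Selenium;

webDriver.Manage().Window.Size = new Size(4800, 2560);
Screenshot ss = ((ITakesScreenshot)webDriver).GetScreenshot();
ss.SaveAsFile("fullpage.png", ScreenshotImageFormat.Png);
```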

Searching browser contents after automated form submission

I'm using WatiN to automate form submissions on my website. The problem I'm having is that I can't find any documentation on how to search the page after submission and verify whether a string is present. I need it to look for a string in the resulting page; if the string is present, continue inserting the next variable in the list, and if it is NOT present, log which inserted string caused the text not to show up, preferably to a text file. Here's my code so far.
browser.GoTo("https://site.com");
// locate the form by name and fill it
browser.TextField(Find.ByName("form")).TypeText("example");
// click the submit button
browser.Button(Find.ByName("submit")).Click();
Unfortunately, that's all I've got. Help is appreciated; I just need a push in the right direction.
Use browser.ContainsText("text to find") to verify whether the string is present:

string[] textToAddList = File.ReadAllLines(@"C:\Users\Syd\Desktop\list.txt");
browser.GoTo("https://site.com");
foreach (string textToAdd in textToAddList)
{
    browser.TextField(Find.ByName("form")).TypeText(textToAdd);
    browser.Button(Find.ByName("submit")).Click();
    browser.WaitUntilComplete();
    if (!browser.ContainsText("text to find"))
    {
        // log the value that failed, e.g.:
        // File.AppendAllText(@"C:\Users\Syd\Desktop\failures.txt", textToAdd + Environment.NewLine);
        break;
    }
}

Emulating click based event on a web page

This link goes to an implementation of the IMAGINATION captcha: imagination
The authors have themselves asked people to pit algorithms against it to test its resistance to AI attacks.
Essentially, the first page asks for a mouse click anywhere on the image. My problem is that my algorithm comes up with a point (x, y) on the image, but I want to try it in real time against this link.
Can someone tell me how I can send the point values to this link and get back a message saying whether I was successful?
Essentially, I am asking how I can emulate a mouse click on this link, at the points my algorithm gives, using C#.
I am asking this only to study the features of this captcha and its accuracy.
Thanks a lot.
If you are able to execute JavaScript on that page directly, this code will do:
submitClick(document.getElementById("img").value, x, y, "tiled");
Otherwise, hit this URL, substituting your own values for id, x, and y:
http://goldbach.cse.psu.edu/s/captcha/captcha_controller.php?id=87170&x=66&y=149&source=tiled
Parse the response - If your coordinates are correct, the response will contain "step 2". If not, the response will contain "step 1" and it will have a <div id="error">.
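
That request/parse step could be sketched in C# with HttpClient; the id/x/y values are whatever the algorithm produces, and the "step 2" / "step 1" markers are taken from the answer above:

```csharp
// Sketch: submit a click to the captcha controller and inspect the
// response body for the success marker.
using System;
using System.Net.Http;
using System.Threading.Tasks;

async Task<bool> TryClickAsync(int id, int x, int y)
{
    using var client = new HttpClient();
    string url =
        "http://goldbach.cse.psu.edu/s/captcha/captcha_controller.php" +
        $"?id={id}&x={x}&y={y}&source=tiled";
    string body = await client.GetStringAsync(url);
    // "step 2" means the click was accepted; "step 1" plus an
    // error div means it was not.
    return body.Contains("step 2");
}
```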
If you want to use their live site from code, I think you're talking about screen scraping. I highly recommend looking into the HTML Agility Pack (available via NuGet). It will allow you to read the DOM into your application and then interact with it however you please.
This could be a dumb answer, but if you're trying to emulate a mouse click and find out whether it succeeded, why not use the Selenium browser add-in to record your scripts, or write your own?
Then you can have a test suite to run against your various captchas. Hope this achieves what you're trying to do.
Telerik's WebAii has this feature. Here is some sample code I used at some point in the past, customized for your situation. Just put this in a class; the class container is left out because it jacks up the formatting.
protected static Manager _manager = null;
protected Browser _main;
protected Find _find;

public WebAiiAutomater()
{
    if (_manager != null)
    {
        foreach (var browser in _manager.Browsers)
        {
            browser.Close();
        }
        return;
    }
    var settings = new Settings(BrowserType.InternetExplorer, @"c:\log\") { ClientReadyTimeout = 60 * 1000 };
    _manager = new Manager(settings);
    _manager.Start();
    _manager.LaunchNewBrowser();
    _manager.ActiveBrowser.AutoWaitUntilReady = true;
    _main = _manager.ActiveBrowser;
    _find = _main.Find;
    _main.NavigateTo(@"http://goldbach.cse.psu.edu/s/captcha/");
    // start looping over your algorithm, trying different x,y coords against ClickImage(x, y)
}

public bool ClickImage(int x, int y)
{
    //var beginsearch = _find.ById("captcha_img"); // this should get you the image, but you don't need it
    _manager.Desktop.Mouse.Click(MouseClickType.LeftClick, x, y);
    Thread.Sleep(1000); // wait for postback - might be handled internally though
    var errorDiv = _find.ById("error");
    return errorDiv != null; // true when the error div is present, i.e. the click missed
}
