Selenium Tests fail for different reasons in C#

I am trying to fix a number of unit tests which use the Selenium Edge WebDriver for C#.
Sometimes the tests run through once without failing, but if you run them again they fail. However, the failures are not consistent and the reasons are numerous: for example, a test may time out, fail to find an element, or fail to find the title of a document on the page.
I have tried all sorts of things, such as explicit waits (wait until the element has loaded), but this is unreliable if the elements cannot be found.
Has anyone else experienced this and how was it solved?

This situation is called flaky tests. There could be several reasons:
Network latency (timeouts)
Data-related issues
Unstable waits
Dynamic HTML content for which your code does not have flexible selectors.
So, without the actual code it is not possible to comment in detail; a more defensive explicit wait (sketched below) is usually the first thing to check.
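For what it's worth, here is a minimal sketch of a more defensive explicit wait in C#. The URL, element id, and timeout are placeholders to adapt to your page:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Edge;
using OpenQA.Selenium.Support.UI;

IWebDriver driver = new EdgeDriver();
try
{
    driver.Navigate().GoToUrl("https://example.com/some-page");   // placeholder URL

    var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(20));
    // Keep polling through the transient errors that flaky DOM updates cause.
    wait.IgnoreExceptionTypes(typeof(NoSuchElementException),
                              typeof(StaleElementReferenceException));

    // Wait until the element is both present and visible before using it.
    IWebElement panel = wait.Until(d =>
    {
        var element = d.FindElement(By.Id("resultPanel"));        // placeholder id
        return element.Displayed ? element : null;
    });

    Console.WriteLine(panel.Text);
}
finally
{
    driver.Quit();
}

If the waits already look like this and the runs are still inconsistent, the flakiness is more likely to come from the list above (network, test data, dynamic selectors) than from the wait itself.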

Related

Which web page element tag is better to use to locate and get a value with WebDriver

I am using Selenium for UI tests with C#. At the starting step of testing the front end, I have to decide how to "mark" my web elements so that the tests are easy to write and maintain.
So if I have an <input>, a <div>, or any other element, which is better to use: id="element_id", name="element_name", class="class_name", or just XPath?
Or something else?
Normally, the way to make an element unique in HTML is the id attribute. Most sites take for granted that the id of a tag is unique.
Schema markup is mostly used if you are identifying "items" on your site, which can usually be described that way.
Note that class is not unique at all: it is mostly used for styling, you can use the same class on multiple elements, and a single element can carry multiple classes.
I would suggest speaking to your development team.
From a developer's perspective, if I am going to change the UI to fix a bug or enhance the look and feel, those changes will affect your Selenium UI tests. The attribute least likely to change is id="element_id". Developers usually play around with CSS, so if you locate an element with Find(By.CssSelector("cssselectors")) and the CSS has changed, your test is going to fail, and the same goes for Find(By.XPath("//xpath")).
I would say go with Find(By.Id("element_id"));
However, if everything changes, then you will have to change everything in your tests.
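To make the trade-off concrete, here is a small sketch using the standard FindElement API (the ids and selectors below are invented for illustration):

using OpenQA.Selenium;
using OpenQA.Selenium.Edge;

IWebDriver driver = new EdgeDriver();
driver.Navigate().GoToUrl("https://example.com/login");   // placeholder URL

// Most robust: a stable, unique id agreed with the developers.
IWebElement byId = driver.FindElement(By.Id("login_button"));

// More brittle: tied to styling classes, which change whenever the look and feel does.
IWebElement byCss = driver.FindElement(By.CssSelector("div.login-form > button.primary"));

// Most brittle: tied to the document structure and to class names at the same time.
IWebElement byXPath = driver.FindElement(By.XPath("//div[@class='login-form']/button[1]"));

driver.Quit();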

Comparison of HTML including handling singleton elements

I know this must have a simple solution, but I'm finding myself banging my head against it. I'm trying to write regression tests for some HTML pages generated by my company's application. They're unlikely to change frequently, but we do want checks to ensure that the correct page is displayed for every country. My impulse is to pull the HTML from the approved pages and then use Selenium to check the values. The problem I'm running into is that pulling the HTML up in different browsers yields different results when it comes to singleton elements, both the void ones and the ones that simply don't require a closing tag, such as <P> and <HR>. Thus, I can't just do a text compare, and even packages such as HtmlDiff report a change.
Due to the occasional lack of closing tags, my attempt to fix things by pulling the text into an XML document and then re-exporting it failed. I've had some small success with monkeying with the input to add closing tags, but I'm not an HTML or XML expert, so it feels like I'm trying to patch things with band-aids that may or may not distort the results.
Is there a simple and free solution I can use for comparing two HTML pages with the same style and check for actual equivalence despite differences in singleton elements?
One approach is to use PhantomJS and write custom JavaScript to check that the pages conform to what you expect.
(In general, I think any headless browser can be helpful for this task.)
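Another option, closer to the normalization the question already attempted, is a tolerant HTML parser such as HtmlAgilityPack (a free .NET library): parse both pages and re-serialize them in one consistent form before comparing. This is only a sketch, and the exact normalization options worth enabling are an assumption to verify against your pages:

using System;
using System.IO;
using HtmlAgilityPack;

static string Normalize(string html)
{
    var doc = new HtmlDocument();
    // Serialize void/empty elements in a single, consistent self-closing form
    // so that variations such as <hr> vs. <hr /> no longer show up as diffs.
    doc.OptionWriteEmptyNodes = true;
    doc.LoadHtml(html);
    return doc.DocumentNode.OuterHtml;
}

string approved = Normalize(File.ReadAllText("approved_page.html"));   // placeholder paths
string actual   = Normalize(File.ReadAllText("rendered_page.html"));
Console.WriteLine(approved == actual ? "Pages match" : "Pages differ");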

Handling/Passing multiple parameters from feature file in gherkin

How do I pass multiple parameters from a feature file?
I am aware of the "Examples:" concept, but it makes the feature file more complicated and harder to read.
Example:
Scenario Outline: To verify that the default value is used for some timeout when an invalid value is set for, or the timeout parameter is removed from, the configuration
When <parameterA> is <action> with <parameterB> for <someOtherParameterPair> in <fileNameWithSectionName>
Then <parameterA> is updated with value <parameterB> for <someOtherParameterPair> in <fileNameWithSectionName> as per defined <action>
Examples:
| parameterA   | parameterB   | action | someOtherParameterPair | fileNameWithSectionName |
| oneParameter | twoParameter | update | key:Value              | abc.config:appSettings  |
| oneParameter | twoParameter | delete | key:Value              | def.config:appSettings  |
Here I have around 7 parameters coming from the test case (which I have tried to accommodate in 5 placeholders because of the limitation).
I will be splitting "someOtherParameterPair" and "fileNameWithSectionName" into two values each using Split in the step definition file, so in total around 7 parameters will be used in the test case.
But I am not sure whether accepting such a huge number of parameters from Given/When/Then statements is feasible. It also makes my test case unreadable.
In the above scenario, I am trying to modify some parameters (which I pass from the feature file so that my When/Then steps can modify them) in a *.config file at a certain location.
After that I need to execute the test case.
Most of the other cases in my test suite work in the same manner.
Please help me understand whether BDD is the right approach. Is BDD going to create maintenance issues, given that I am driving almost everything from the feature file?
The answer is: don't write your features like this. Instead of using your feature to describe how you are testing something, use it to explain what you are testing and why you are testing it.
Generally this means you don't need to use examples, and you certainly never need examples as complicated as the ones you've got. You can always push the usage of examples down to a lower level, e.g. the step definitions.
In this case it looks like you should be writing a unit test; there is nothing of business value described in this scenario.
BDD is about describing behaviour and using that to drive development. You can't use it to test things after they have already been written!
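If the project happens to use SpecFlow (the question doesn't say, so treat this as an assumption), pushing the detail down could look roughly like the sketch below: the feature stays declarative ("When the requestTimeout setting is removed") and the binding hides which config file and section are touched. The file name, regex, and key are made up for illustration:

using System.Linq;
using System.Xml.Linq;
using TechTalk.SpecFlow;

[Binding]
public class ConfigDefaultsSteps
{
    // Hypothetical path; in the real suite this would point at the deployed config file.
    private const string ConfigPath = "abc.config";

    [When(@"the (\w+) setting is removed")]
    public void WhenTheSettingIsRemoved(string key)
    {
        var doc = XDocument.Load(ConfigPath);
        doc.Descendants("appSettings")
           .Elements("add")
           .Where(e => (string)e.Attribute("key") == key)
           .Remove();
        doc.Save(ConfigPath);
    }
}

The "update" and "delete" variations from the Examples table would become separate, plainly worded steps instead of extra columns.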

How can I ignore some unit-tests without lowering the code-coverage percentage?

I have one class which talks to the database.
I have integration tests which talk to the DB and assert the relevant changes, but I want those tests to be ignored when I commit my code, because I do not want them to be run automatically later on.
(For now I only use them during development.)
When I put the [Ignore] attribute on them they are not called, but code coverage drops dramatically.
Is there a way to keep those tests but not have them run automatically on the build machine, in such a way that the fact that they are ignored does not influence the code-coverage percentage?
Whatever code coverage tool you use most likely has some kind of CoverageIgnore attribute or something along those lines (at least the ones I've used do), so you just place that on the method that gets called from those unit tests and you should be fine.
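In .NET specifically, there is a built-in System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverageAttribute which most of the common coverage tools honour (worth verifying for your particular tool). The class name below is made up:

using System.Diagnostics.CodeAnalysis;

// Excluded from the coverage figures, so ignoring its integration
// tests on the build machine no longer drags the percentage down.
[ExcludeFromCodeCoverage]
public class CustomerRepository
{
    public void Save(object entity)
    {
        // talks to the database
    }
}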
What you are requesting doesn't really make sense: code coverage is measured by executing your tests and logging which statements/conditions etc. are executed. If you disable your tests, nothing gets executed and your code coverage goes down.
TestNG has groups, so you can specify that only some groups run automatically and keep the others for use outside of that. You didn't specify your unit-testing framework, but it might have something similar.
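NUnit, a common choice in C#, has a comparable mechanism in its [Category] attribute: tag the integration tests and filter them out on the build machine. The test names and category here are invented, and the exact filter syntax depends on your runner (for the NUnit 3 console runner it is along the lines of --where "cat != Integration"):

using NUnit.Framework;

[TestFixture]
public class CustomerRepositoryTests
{
    [Test]
    [Category("Integration")]   // filtered out on the CI run, kept for local development
    public void Save_WritesRowToDatabase()
    {
        // talks to the real database; run manually while developing
    }
}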
I do not know if this is applicable to your situation, but spontaneously I am thinking of a setup where you have two solution files (.sln): one with unit/integration tests and one without. The two solutions share the same code and project files, with the exception that your development/testing solution includes your unit tests (which are built and run at compile time) and the other solution doesn't. Both solutions should be under source control, but only the one without unit tests is built by the build server.
This kind of setup should not require you to change existing code (much), which I would prefer over rewriting code to fit your test setup.

Tracking Globalization progress

With our next major release we are looking to globalize our ASP.NET application, and I was asked to think of a way to keep track of which code has already been worked on in this effort.
My thought was to use a custom attribute and place it on all classes that have been "fixed".
What do you think?
Does anyone have a better idea?
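For reference, a minimal sketch of what that marker-attribute idea could look like, together with the small reflection scan needed to turn it into a progress figure. All names here are made up:

using System;
using System.Linq;
using System.Reflection;

// Hypothetical marker for classes that have already been globalized.
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public sealed class GlobalizedAttribute : Attribute { }

public static class GlobalizationProgress
{
    // Counts marked classes against all classes in the given assembly.
    public static string Report(Assembly assembly)
    {
        var classes = assembly.GetTypes().Where(t => t.IsClass && !t.IsNested).ToList();
        int done = classes.Count(t => t.GetCustomAttribute<GlobalizedAttribute>() != null);
        return $"{done} of {classes.Count} classes marked as globalized";
    }
}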
Using an attribute to determine which classes have been globalized would then require a tool to process the code and determine which classes have and haven't been "processed"; it seems like it's getting a bit complicated.
A more traditional project tracking process would probably be better - and wouldn't "pollute" your code with attributes/other markup that have no functional meaning beyond the end of the globalisation project. How about having a defect raised for each class that requires work, and tracking it that way?
What about just counting or listing the classes and then working class by class? While an attribute may be an interesting idea, I'd regard it as over-engineered. Globalizing does nothing more than, well, going through each class and globalizing the code :)
You want to finish that anyway before the next release, so go ahead and just do it one by one, and there you have your progress. I'd regard a defect raised for each class as too much as well.
In my last project, I started full globalization a little late. I just went through the list of code files, from top to bottom. Alphabetically in my case, and folder after folder. So I always only had to remember which file I last worked on. That worked pretty well for me.
Edit: Another thing: In my last project, globalizing mainly involved moving hard-coded strings to resource files, and re-generating all text when the language changes at runtime. But you'll also have to think about things like number formats and the like. Microsoft's FxCop helped me with that, since it marks all number conversions etc. without specifying a culture as violations. FxCop keeps track of this, so when you resolved such a violation and re-ran FxCop, it would report the violation as missing (i.e. solved). That's especially useful for these harder-to-see things.
How about writing a unit test for each page in the app? The unit test would load the page and perform a
foreach (System.Web.UI.Control c in Page.Controls)
{
    // Do work here: load different globalization settings and compare the .Text property.
}
For the work part, load different globalization settings and see if the .Text property (or relevant property for your app) is different.
My assumption would be that no language should come out the same in all but the simplest cases.
Use the set of unit tests that successfully complete to track your progress.
