Inheritance in SpecFlow features - C#

I am looking for a way to implement inheritance in SpecFlow features, e.g. a base feature class containing common scenarios that always have to be tested, plus 3-4 derived feature classes which inherit all the scenarios in the base class and add some of their own. The binding classes would follow a similar inheritance structure. This is needed to test an ASP.NET MVC application which has a base controller (the scenarios in the base feature class) and 4-5 implementations.
I could copy the feature file for each derived controller class, but this would lead to considerable duplication.
Is this possible in SpecFlow, or am I going down the wrong route? Please help. Thanks.

I'm not 100% sure this is the right path to take (read as: I've never needed to do anything like this). For me, any inheritance and re-use comes in the Step Definition (or Binding) classes. But even so...
I don't know if you can do this simply using the tools available in SpecFlow, but as far as I can see you have the following option (this isn't a tested theory... I might test it later, but I figured it might offer you an idea...):
The "code-behind" (designer-generated code) for your Feature files are partial class implementations...
...so I guess you could create a "base" (generic) Feature file...
...then create a partial class file for each of your specific/implementation Feature files' code-behinds...
...each partial class would specify a base class, namely the generated class name from the "base" Feature file's code-behind.
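Something like this hand-written partial might do it (all class names here are assumptions; use whatever names the SpecFlow generator actually emits for your feature titles):
// FeatureInheritance.cs -- a hand-written file in the spec project.
// SpecFlow's generator emits "public partial class XxxFeature" per feature,
// so this extra partial declaration only adds a base class to it.
public partial class ProductsControllerFeature : BaseControllerFeature
{
    // No members needed: inheriting the generated class for the "base"
    // feature makes the test runner execute its scenarios here as well.
}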
Hopefully this helps a little.
EDIT:
Okay, I've tested this theory... it works as I described above. I just created a new project with an associated test/spec project, and did the above in the test/spec project. The test runner ran the base feature, and then ran the specific/implementation feature... which included the base feature's scenarios again.
Have a go - it takes less than 5 minutes to knock up what I've suggested, and see if the result fits your requirement.

Can't decide between appropriate approaches to unit testing of protected methods

Disclaimer: I do know that in an optimal world we only test the public members of an interface. However, in reality, we often have a pre-existing code base that wasn't developed under TDD, hence the need for a more flexible approach.
I'd like to design test methods for an ASPX page (blobb.aspx.cs), and since it doesn't inherit from an interface and there's some logic that can't be refactored out, I have to access and test its protected methods. I've done my Google search and arrived at two different suggestions.
Inherit and test within as shown in this example.
Force access to other assemblies as shown in this example.
The first approach seems to be the most widely suggested, and there are tons of blogs talking about it as well as answers on SO recommending it, so there seems to be a consensus on the subject. However, the second approach seems the most technically proper, and has immense upvotes from the community, with the only eyebrow-raiser being that it's very sparsely mentioned on the web. I haven't found any comparison putting the two against each other, nor any reasoning on which is more appropriate in what circumstances.
Hence, me asking.
From what I was reading on MSDN, it sounded like you could automatically have private accessors or InternalsVisibleTo generated for you:
When you create a unit test for an internal method in C# or for a friend method in Microsoft Visual Basic, a dialog box appears that allows you to choose between having your internal methods accessed with the private accessor or with the InternalsVisibleToAttribute.
From: https://msdn.microsoft.com/en-us/library/bb385974(VS.100).aspx
But then I read:
The use of Accessors has been deprecated in Visual Studio 2010 and will not be included in future versions of Visual Studio.
From: https://msdn.microsoft.com/en-us/library/dd293546(v=vs.100).aspx
Obviously, you could still roll your own accessors, but that would be a development effort all on its own. Even auto-generating an inherited class would be a pain, and you'd just be creating a source of meta-bugs.
So it sounds like InternalsVisibleTo is the way to go, and perhaps you also change the protected methods to protected internal. That way you can access them without creating another test surface for the meta-bugs to cling to.
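A minimal sketch of that combination, assuming MSTest and made-up assembly and method names (WebApp.Tests, FormatTitle):
// In the production assembly, e.g. in AssemblyInfo.cs:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("WebApp.Tests")]

// blobb.aspx.cs -- the method widened from protected to protected internal:
public partial class Blobb : System.Web.UI.Page
{
    // Hypothetical method standing in for the logic under test.
    protected internal string FormatTitle(string raw)
    {
        return (raw ?? string.Empty).Trim();
    }
}

// In the WebApp.Tests assembly (using Microsoft.VisualStudio.TestTools.UnitTesting):
[TestClass]
public class BlobbTests
{
    [TestMethod]
    public void FormatTitle_TrimsWhitespace()
    {
        Assert.AreEqual("Hello", new Blobb().FormatTitle("  Hello  "));
    }
}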

SpecFlow feature files with same steps causing multiple browser instances to launch

I have at least 3 .feature files in my C# SpecFlow test project in which I have, for instance, the step:
Given I am at the Home Page
When I first wrote the step in the file Feature1.feature and created the step method, I placed it in a step file, let's say Steps1.cs, which inherits from a base class that initializes a FirefoxDriver. All my StepsXXXX.cs classes inherit from this base class.
Then I wrote Feature2.feature, which also has the step Given I am at the Home Page, and the step was automatically bound to the one in Steps1.cs.
Till now, no problem. That's pretty much what I wanted: reusable steps throughout the test project. But the problem is, whenever I run a scenario that has steps in different StepsXXXX files, I get multiple browser instances running.
I'm pretty sure this is because my StepsXXXX (binding) classes all inherit from this base class, which has an IWebDriver of its own, and when the step is called, everything else (including the before/after scenario methods) is called too. But I can't figure out how to work around this.
I still want reusable steps. I tried putting these steps in the base class, but it did not work.
I thought of changing the bindings too, but SpecFlow uses meaningful strings to bind, and I don't want to change them to misleading ones.
Has anyone stumbled across this?
Any help is really appreciated.
You can use scoped bindings, with [Scope(Tag = "mytag", Feature = "feature title", Scenario = "scenario title")], to tie a binding to a specific scenario or feature, like this:
Feature: Feature1
Scenario: demo
Given I am at the Home Page
When ....

[Binding, Scope(Feature = "Feature1")]
public class Steps1
{
    [Given(@"I am at the Home Page")]
    public void GivenIAmAtTheHomePage()
    {
    }
}

Feature: Feature2
Scenario: demo
Given I am at the Home Page
When ....

[Binding, Scope(Feature = "Feature2")]
public class Steps2
{
    [Given(@"I am at the Home Page")]
    public void GivenIAmAtTheHomePage()
    {
    }
}
The problem is that SpecFlow bindings don't respect inheritance. All custom attributes are considered global, so all SpecFlow does is search for the classes marked with [Binding], then build up a dictionary of all the [Given]/[When]/[Then]s so that it can evaluate them for a best match. It will then create an instance of the class (if it hasn't already done so) and call the method on it.
As a result, your simple cases all stay in the Steps1 class, because it's the first perfect match. Your more complicated cases start instantiating more classes, hence multiple browsers. And your attempt to refactor won't work because your abstract base class doesn't have a [Binding] on it.
I'd probably start by flattening your whole step class hierarchy into one big AllSteps.cs class. This may seem counter-productive, but all you are really doing is arranging the code the way the current bindings appear to your SpecFlow features. This way you can start to refactor out the overlaps between the different GWT bindings.
At the moment your bindings are arranged around the scenarios. What you will need to do is refactor them around your functionality. Have a read of Whose Domain is it anyway? before you start, as it will probably give you some good ideas. Then have a look at Sharing Data between Bindings in the SpecFlow documentation to work out how to link between your new step classes.
I think this is a lot simpler than the question and answers here make it out to be. There are really two questions at play:
AISki gave you the right answer in the link to the documentation about SpecFlow context, but it was not really presented as the answer, and an inferior answer was presented as the actual one instead.
The answer to the behaviour you see is that you should expect exactly what is happening, given the way you set things up. If you have multiple binding classes that create browser instances (and you do, since they all share a common base that creates one) and they have matches in your features, you should expect multiple browser instances.
The answer for what you intend (a single browser shared among your steps) is that you should use SpecFlow's context injection to control the dependency on a browser instance. This amounts to dependency injection: your step definition classes should take a constructor dependency on something that creates your browser instance. SpecFlow manages these dependencies for you, so the first of your classes to be created triggers a new instance, and every class after that gets the same one.
https://github.com/techtalk/SpecFlow/wiki/Sharing-Data-between-Bindings
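A minimal sketch of that approach, assuming Selenium WebDriver and SpecFlow's built-in context injection (the WebDriverContext class and the URL are made up for illustration):
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

// Created once per scenario by SpecFlow and shared by every binding
// class that asks for it; disposed when the scenario ends.
public class WebDriverContext : System.IDisposable
{
    public IWebDriver Driver { get; private set; }

    public WebDriverContext()
    {
        Driver = new FirefoxDriver();
    }

    public void Dispose()
    {
        Driver.Quit();
    }
}

[Binding]
public class Steps1
{
    private readonly WebDriverContext _context;

    // SpecFlow injects the scenario's single WebDriverContext instance here.
    public Steps1(WebDriverContext context)
    {
        _context = context;
    }

    [Given(@"I am at the Home Page")]
    public void GivenIAmAtTheHomePage()
    {
        _context.Driver.Navigate().GoToUrl("http://localhost/");
    }
}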
I'm facing the same issue.
I wanted to have one feature file that calls steps in different .cs classes. The issue came up when I wanted to set up and tear down for each scenario.
Using a step class constructor and Dispose() is not possible, because a scenario uses more than one step class and I don't want to run the setup multiple times per scenario.
Using [BeforeScenario] and [AfterScenario] in both step classes also makes the runner execute the before and after methods of both classes, so the setup runs twice.
So what I did was create a third class, called something like BrowserScenarioSetup, and put the before and after scenario methods in it to set up a browser for the scenario and store it in the ScenarioContext.Current dictionary. When the test runs, only one browser is created per scenario, and I can use scenario steps defined in any class; they just use ScenarioContext.Current to get the browser instance.
I can give both step classes a common base class with a short method to fetch the browser instance (or any shared instance created in setup), just to hide ScenarioContext.Current.
Finally, I can mark the setup with [BeforeScenario("Browser", "IE")] and use @Browser and @IE on a feature or scenario so this setup method is only called in the suitable context.
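A sketch of that third class (the BrowserScenarioSetup name is from the answer; the "browser" dictionary key and the IE driver are assumptions):
using OpenQA.Selenium;
using OpenQA.Selenium.IE;
using TechTalk.SpecFlow;

[Binding]
public class BrowserScenarioSetup
{
    // SpecFlow runs a tagged hook when the scenario carries one of the
    // listed tags, so only @Browser/@IE scenarios get a browser.
    [BeforeScenario("Browser", "IE")]
    public static void CreateBrowser()
    {
        ScenarioContext.Current["browser"] = new InternetExplorerDriver();
    }

    [AfterScenario("Browser", "IE")]
    public static void QuitBrowser()
    {
        ((IWebDriver)ScenarioContext.Current["browser"]).Quit();
    }
}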

What are the pros and cons of making a FAT class a partial one & splitting it into partial classes?

Recently I was considering a class that seems to have become fat because of too many methods in it.
It's legacy code...
...with many business-logic methods doing all kinds of CRUD on various 'Entities'.
I was thinking of:
making this class partial,
then grouping all methods by the target entities they work on,
and splitting them into separate physical files that each form part of the partial class.
Question:
Can you list the pros and cons of such a refactoring, that is, making a fat concrete class a partial class and splitting it into slimmer partial classes?
One pro I can think of is the reduction of conflicts/merges in your source control. You'll reduce the number of parallel check-outs and the merging headaches that invariably come when the devs check in their work. A big pro, I think, if you have a number of devs working on the same class quite often.
I think you are talking only about simplicity in handling the class. There shouldn't be performance or behavioral pros and cons, because the compiled output is the same either way:
It is possible to split the definition of a class or a struct, or an interface over two or more source files. Each source file contains a section of the class definition, and all parts are combined when the application is compiled.
Now, the pros and cons I can think of (again, only about simplicity):
Pro: fewer conflicts/merges when working in a team.
Pro: easier to search for code in the class.
Con: you need to know which file contains which code, or it can get a little annoying.
I would go for the refactor, especially considering all the facilities given by the IDE, where you just hit F12 (or any other key) to go to a method instead of opening the file.
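For illustration, a minimal sketch of such a split (class, file, and entity names are made up):
// OrderManager.Customers.cs
public partial class OrderManager
{
    public void CreateCustomer(string name) { /* CRUD for Customer */ }
}

// OrderManager.Invoices.cs -- same class, different physical file.
public partial class OrderManager
{
    public void CreateInvoice(int orderId) { /* CRUD for Invoice */ }
}

// The compiler merges both files into one OrderManager class, so
// callers see no difference from the unsplit version.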
Splitting a large class into partial classes perhaps makes life easier in the short term, but it's not really an appropriate solution to the code bloat that your class is experiencing.
From my experience, the only benefit that splitting an existing large class up gives you is that it's easier to avoid having to constantly merge code when working with other developers on said class. However, you still have the core problem of unrelated functionality being packaged into one class.
It's better to treat the breaking down into partial classes as the very first step in a full refactoring. If you're able to easily extract related methods and members into their own partial classes (without breaking things), then you can use this as the basis for creating entirely standalone classes and rethinking the relationships between them.
Edit: I should clarify that this advice is given under the assumption that your legacy code has unrelated functionality in one class as a result of years of "just add one more method here". There are genuine reasons for having functionality spread across partial classes, for example, I've worked on code before that has a very large interface in one file, but then has all the methods grouped into partial classes based on areas of product functionality - which I think is fine.
I would say partial classes help with maintaining the code, and are most helpful with legacy code, where they avoid further changes on the referencing side. They also make it easier to refactor later.
If you're concerned about how to refactor a class, I suggest reading into SOLID design principles.
I think you should focus on Single responsibility principle (the S in SOLID), which states an object should only have one responsibility.
My answer doesn't directly address whether using partial classes would be beneficial to you, but I believe that if you focus on the SOLID design principles, it should at least give you some ideas on how to organize your code.
I see partial classes only as a way of extending a class whose code was generated (and can be re-generated at any time) without your custom code being overwritten. You see this with the generated code for Forms and with Entity Framework's generated DbContext code, for example.
Refactoring a large legacy class should probably be done by grouping and separating out single responsibilities into separate classes.

Why doesn't C# have package private?

I'm learning C# and coming from a Java world, I was a little confused to see that C# doesn't have a "package private". Most comments I've seen regarding this amount to "You cannot do it; the language wasn't designed this way". I also saw some workarounds that involve internal and partial along with comments that said these workarounds go against the language's design.
Why was C# designed this way? Also, how would I do something like the following: I have a Product class and a ProductInstance class. The only way I want a ProductInstance to be created is via a factory method in the Product class. In Java, I would put ProductInstance in the same package as Product, but make its constructor package private so that only Product would have access to it. This way, anyone who wants to create a ProductInstance can only do so via the factory method in the Product class. How would I accomplish the same thing in C#?
internal is what you are after. It means the member is accessible by any class in the same assembly. There is nothing wrong with using it for this purpose (Product and ProductInstance); it is one of the things it was designed for. C# chose not to make namespaces significant: they are used for organization, not to determine what types can see one another, as package private does in Java.
partial is nothing at all like internal or package private. It is simply a way to split the implementation of a class into multiple files, with some extensibility options thrown in for good measure.
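A sketch of the factory from the question using internal (the member bodies are assumptions):
public class Product
{
    // The factory method: the only intended way to obtain a ProductInstance.
    public ProductInstance CreateInstance()
    {
        return new ProductInstance(this); // legal: same assembly
    }
}

public class ProductInstance
{
    // internal constructor: invisible outside this assembly, so external
    // callers are forced through Product.CreateInstance().
    internal ProductInstance(Product product)
    {
    }
}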
Packages don't really exist in the same way as they do in Java. Namespaces are used to organize code and prevent naming clashes, but not for access control. Projects/assemblies can be used for access control, but you can't have nested projects/assemblies like you can with packages.
Use internal to hide one project's members from another.

C# namespace visibility

My situation is very simple.
I have a class A, called through a WCF service, which delegates its work to several 'helper' classes. These helper classes are obviously internal. The problem is that I don't want anyone to call these classes directly; I would like them to always go through class A. In other words, I need 'namespace visibility'. I think I can simulate it by making the helper classes private, nesting them inside A and splitting A (thanks to the partial keyword) into several files, one per helper class. What is your opinion of that solution? Is it very dirty?
Thanks in advance.
If the helper classes are internal, code outside the assembly won't be able to call into them anyway. Do you really not trust the rest of the code in your assembly?
There's no such thing as namespace visibility in .NET, although I agree sometimes it would be useful.
I would say that using partial to effectively make one giant class would be a pretty ugly solution. I'd just leave it at internal visibility and use normal code review processes to avoid calling into the helpers from elsewhere. Heck, you may even find that the helpers are genuinely useful elsewhere (or at least some bits of them).
Yes, it would be an ugly solution. The practical way would be to put class A and the helper classes in a separate assembly, with A being public and the helper classes internal. Do you really have a legitimate concern that other classes in the same assembly should not be able to use the helper classes? Generally an assembly is a structural unit, created by one team.
Another option (if you want to reuse the helper classes) is to put all the internal helper classes in one assembly and then use the InternalsVisibleToAttribute to open these classes up for use from, e.g., the AssemblyA that contains class A.
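A minimal sketch of that layout (AssemblyA is the name from the answer; the helper class and methods are made up):
// In the Helpers assembly, e.g. in AssemblyInfo.cs:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("AssemblyA")]

internal class FormattingHelper
{
    public string Clean(string input)
    {
        return input.Trim();
    }
}

// In AssemblyA -- this compiles only because Helpers opened its internals:
public class A
{
    public string Process(string input)
    {
        return new FormattingHelper().Clean(input);
    }
}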
For smaller solutions, you should be OK with splitting your assembly when you need the encapsulation.
But for very large solutions (>100 projects), you might be forced to find alternatives, as Visual Studio starts behaving badly once you pass 150 projects: build times soar and you start running out of memory.
Sadly, this has not improved much in VS2010.
